
Adult Speech in Different Emotional States: Temporal and Spectral Features

  • ACOUSTICS OF LIVING SYSTEMS. BIOMEDICAL ACOUSTICS
  • Published in: Acoustical Physics

Abstract

The aim of the study was to determine individual features of adult speech in different emotional states. The acoustic speech characteristics of 12 adult native Russian speakers were studied. Audio recordings were made of the informants uttering a meaningless phrase in different emotional states: joy, anger, sadness, fear, and a neutral state. The temporal and spectral characteristics of the speech were analyzed in the Cool Edit Pro sound editor. In male speech, the maximum pitch range was found in phrases uttered in the neutral state and in a state of joy, and the minimum in a state of sadness. In female speech, the maximum pitch range was found in states of joy and anger, and the minimum in a state of sadness and in the neutral state. The pitch range in female speech was larger than that in male speech. For seven informants, the duration of utterances was longest in a state of sadness and, conversely, shortest in a state of joy. Both male and female utterances in a state of joy were characterized by maximum pitch range values and, conversely, in a state of sadness by minimum values. Pauses between words within utterances in a state of sadness were detected in both men and women. Thus, differences in the temporal and spectral characteristics of utterances in different emotional states were revealed, and individual features of the manifestation of the emotional state in adult speech were determined.
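The phrase-level measures reported above (utterance duration, pitch range, and pauses between words) were obtained in the Cool Edit Pro sound editor. Purely as an illustrative sketch of how such temporal and spectral measures can be computed programmatically, the Python fragment below estimates them with the librosa library; the file name, pitch limits, and silence threshold are assumptions, not the parameters used in the study.

import numpy as np
import librosa

# Hypothetical recording of one phrase uttered in a given emotional state.
y, sr = librosa.load("phrase_joy.wav", sr=None)
duration_s = len(y) / sr  # total utterance duration, s

# F0 contour via the pYIN tracker; 65-500 Hz roughly covers adult voices.
f0, voiced_flag, _ = librosa.pyin(y, fmin=65.0, fmax=500.0, sr=sr)
f0_voiced = f0[voiced_flag]
pitch_range_hz = float(np.nanmax(f0_voiced) - np.nanmin(f0_voiced))

# Pauses: gaps between consecutive non-silent intervals (threshold assumed).
intervals = librosa.effects.split(y, top_db=35)
pauses_s = [(start - prev_end) / sr
            for (_, prev_end), (start, _) in zip(intervals[:-1], intervals[1:])]

print(f"duration = {duration_s:.2f} s, "
      f"F0 range = {pitch_range_hz:.1f} Hz, "
      f"pauses >= 0.2 s: {sum(p >= 0.2 for p in pauses_s)}")

Restricting the range statistic to voiced frames avoids the undefined (NaN) F0 values that pYIN returns for unvoiced and silent frames; the 0.2 s floor on pause duration is likewise only an illustrative choice.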


Funding

The study was supported by the Russian Science Foundation (project no. 22-45-02007).

Author information

Corresponding author

Correspondence to A. V. Kurazhova.

Ethics declarations

CONFLICT OF INTEREST

As the author of this work, I declare that I have no conflicts of interest.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

All studies were conducted in accordance with the principles of biomedical ethics as set out in the 1964 Declaration of Helsinki and its subsequent amendments. They were also approved by the Ethics Committee of St. Petersburg University (St. Petersburg), protocol no. 115-02-2 of July 4, 2022.

Each study participant provided voluntary written informed consent after receiving an explanation of the potential risks and benefits, as well as the procedure of the study.

Additional information

Publisher’s Note.

Pleiades Publishing remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Kurazhova, A.V. Adult Speech in Different Emotional States: Temporal and Spectral Features. Acoust. Phys. 70, 175–181 (2024). https://doi.org/10.1134/S1063771023601127
