Preregistration

Predicting Affective States from Acoustic Voice Cues Collected with Smartphones

Author(s) / Creator(s)

Koch, Timo
Schoedel, Ramona

Abstract / Description

The expression and recognition of emotions (i.e., short-lived and directed representations of affective states) through the acoustic properties of speech is a unique feature of human communication (Weninger et al., 2013). Researchers have identified acoustic features that are predictive of affective states, and emotion-detecting algorithms have been developed (Schuller, 2018). However, most studies have used speech data produced by actors instructed to act out a given emotion, or speech samples labelled by raters instructed to assign affective labels to recorded utterances (e.g., from TV shows). Both enacted and labelled speech come with multiple downsides, since these approaches assess expressed affect rather than actually experienced affective states conveyed through voice. In this work, we want to investigate whether we can predict in-situ self-reported affective states from objective voice parameters collected with smartphones in everyday life. Further, we want to explore which acoustic features are most predictive of experienced affective states. Finally, we want to analyze how the affective quality of instructed spoken language (e.g., a sentence with negative affective valence) translates into objective markers in the acoustic signal, which could in turn alter the predictions of our models.
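
As a rough illustration of the prediction task described above, the sketch below extracts a small set of acoustic summary features (pitch, loudness, spectral shape) from voice recordings and fits a regularized linear model to self-reported valence ratings. It is a minimal sketch, not the preregistered pipeline: the feature set, the librosa-based extraction, the Ridge model, and all file names and ratings are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the prediction task: acoustic summary
# features -> self-reported valence. Not the authors' preregistered pipeline;
# feature choices, model, file names, and ratings are all placeholders.
import numpy as np
import librosa
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score


def acoustic_features(wav_path):
    """Summarize one recording as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # pitch contour
    rms = librosa.feature.rms(y=y)[0]                   # frame-wise loudness
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # spectral shape
    return np.concatenate([
        [f0.mean(), f0.std()],    # pitch level and variability
        [rms.mean(), rms.std()],  # loudness level and variability
        mfcc.mean(axis=1),
        mfcc.std(axis=1),
    ])


# Placeholder inputs: paths to in-situ voice samples and the matching
# self-reported valence ratings from experience sampling.
wav_paths = ["rec_001.wav", "rec_002.wav", "rec_003.wav", "rec_004.wav"]
valence = np.array([4.0, 2.5, 1.0, 3.5])

X = np.vstack([acoustic_features(p) for p in wav_paths])
mae = -cross_val_score(Ridge(alpha=1.0), X, valence, cv=2,
                       scoring="neg_mean_absolute_error").mean()
print(f"Cross-validated mean absolute error: {mae:.2f}")
```

In applied affective-computing work, a standardized parameter set such as eGeMAPS (extracted with openSMILE) would be a more typical feature choice, but the cross-validated modelling logic stays the same.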

Persistent Identifier

https://hdl.handle.net/20.500.12034/4033
https://doi.org/10.23668/psycharchives.4454

PsychArchives acquisition timestamp

2021-01-07 10:19:21 UTC

Publisher

PsychArchives

Citation

Koch, T., & Schoedel, R. (2021). Predicting Affective States from Acoustic Voice Cues Collected with Smartphones. PsychArchives. https://doi.org/10.23668/PSYCHARCHIVES.4454
  • Made available on
    2021-01-07T10:19:21Z
  • Date of first publication
    2021-01-07
  • Publication status
    other
  • Review status
    unknown
  • Language of content
    eng
  • Is related to
    https://doi.org/10.23668/psycharchives.2901
  • Dewey Decimal Classification number(s)
    150
  • DRO type
    preregistration
  • Visible tag(s)
    Smartphone Sensing Panel Study