Please use this identifier to cite or link to this item: http://dx.doi.org/10.23668/psycharchives.4454
Full metadata record
DC Field | Value | Language
dc.rights.license | CC-BY 4.0 | -
dc.contributor.author | Koch, Timo | -
dc.contributor.author | Schoedel, Ramona | -
dc.date.accessioned | 2021-01-07T10:19:21Z | -
dc.date.available | 2021-01-07T10:19:21Z | -
dc.date.issued | 2021-01-07 | -
dc.identifier.citation | Koch, T., & Schoedel, R. (2021). Predicting Affective States from Acoustic Voice Cues Collected with Smartphones. PsychArchives. https://doi.org/10.23668/PSYCHARCHIVES.4454 | en
dc.identifier.uri | https://hdl.handle.net/20.500.12034/4033 | -
dc.identifier.uri | http://dx.doi.org/10.23668/psycharchives.4454 | -
dc.description.abstract | The expression and recognition of emotions (i.e., short-lived and directed representations of affective states) through the acoustic properties of speech is a unique feature of human communication (Weninger et al., 2013). Researchers have identified acoustic features that are predictive of affective states, and emotion-detecting algorithms have been developed (Schuller, 2018). However, most studies used speech data produced by actors who were instructed to act out a given emotion, or speech samples labelled by raters who were instructed to add affective labels to recorded utterances (e.g., from TV shows). Both enacted and labelled speech come with multiple downsides, since these approaches assess expressed affect rather than the actual experience of affective states through voice. In this work, we want to investigate whether we can predict in-situ self-reported affective states from objective voice parameters collected with smartphones in everyday life. Further, we want to explore which acoustic features are most predictive of the experience of affective states. Finally, we want to analyze how the affective quality of instructed spoken language (e.g., a sentence with negative affective valence) translates into objective markers in the acoustic signal, which could in turn alter the predictions of our models. | en
dc.language.iso | eng | -
dc.publisher | PsychArchives | en
dc.relation.uri | http://dx.doi.org/10.23668/psycharchives.2901 | -
dc.rights | openAccess | en
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.subject.ddc | 150 | -
dc.title | Predicting Affective States from Acoustic Voice Cues Collected with Smartphones | en
dc.type | preregistration | en
dc.description.review | unknown | en
dc.description.pubstatus | other | en
Appears in Collections: Preregistration

Files in This Item:
File | Size | Format
Preregistration_Affective States_Voice.pdf | 119,63 kB | Adobe PDF


This item is licensed under a Creative Commons License.