Cybervisuals or the meaning of memes: multimodal perception, emotion and meaning-attribution to digital imagery
Author(s) / Creator(s)
Müller, Marion G.
Barth, Christof
Christ, Katharina
Abstract / Description
Recently, viral internet memes have become a hot topic in mass media research (e.g., Shifman, 2013, 2014; Gal et al., 2016; Marcus & Singer, 2017; Ross & Rivers, 2017; Nissenbaum & Shifman, 2017, 2018; Babic & Volarevic, 2018; Lobinger et al., 2019). The scope of research ranges from entertaining to political memes on a global scale, mostly assessing the types and contents of memetic communication. This paper proposes a mixed-method approach combining an eyetracking experiment and a self-report questionnaire with the aim of improving the understanding of perception, evaluation, and meaning-attribution processes for internet memes. These new forms of communication and expression in a multimodal, yet predominantly visual, online format have long left the merely interpersonal communication realm. Viral memes constitute a societally and politically relevant global communication format that is still understudied in terms of the meanings generated by and attributed to them. Theoretically, this within-participants experiment builds on the Visual Communication Process Model (VCPM) developed by Müller et al. (2012), focusing on the major visual communication processes from perception to meaning-attribution to emotional evaluation. Valence, emotion-attribution, and meaning-attribution are the key variables tested in this experiment. Building on results from previous research on press photography, one key question is how valence, emotion, and meaning are influenced by the textual and/or the visual elements of the meme stimuli. In an eyetracking experiment using TobiiPro3-Lab soft- and hardware, 30 participants (XF/XM) view 20 text-visual experimental stimuli, all downloaded from publicly accessible online sites, in randomized order. Each stimulus has been manipulated to provide both a positive and a negative version by using generic image-editing software to modify the textual elements of each meme.
Positive and negative versions are randomized and equally distributed among the participants. Participants evaluate three aspects of visual communication: whether the meme has a positive, negative, or neutral meaning (valence); what kind of emotion is depicted and which emotional reaction is elicited by the stimulus (emotion-attribution); and what meaning they associate with each meme (meaning-attribution). While valence and emotion are tested in the eyetracking experiment, meaning-attribution is tested through the post-experimental survey.
Persistent Identifier
https://hdl.handle.net/20.500.12034/2268.2
https://doi.org/10.23668/psycharchives.4804
PsychArchives acquisition timestamp
2021-05-06 09:52:47 UTC
Citation
Müller, M. G., Barth, C., & Christ, K. (2021). Cybervisuals or the meaning of memes: multimodal perception, emotion and meaning-attribution to digital imagery. Leibniz Institut für Psychologische Information und Dokumentation (ZPID). https://doi.org/10.23668/PSYCHARCHIVES.4804
Files
ZPiD-PräregistrierungCybervisuals_MüllerBarthChrist_080421.pdf (Adobe PDF, 968.08 KB, MD5: 8b8e575c1b2e5fa056d42c8f4a96bb33)
2_Einführung.pdf (Adobe PDF, 252.9 KB, MD5: 7554945581d7659fcb4f5012eb337d3f)
3_Fragebogen I.pdf (Adobe PDF, 284.51 KB, MD5: 500d4752774f03ad6af701646d76431b)
4_Ablauf Tobii Experiment.pdf (Adobe PDF, 17.06 MB, MD5: c1c5ab62be688dc672b3cbe761f8114b)
5_Fragebogen II.pdf (Adobe PDF, 463.03 KB, MD5: faa0021207c3d2b848fbb303e26ab2c4)
6_Debriefing.pdf (Adobe PDF, 48.58 KB, MD5: 2c3ffdf38664eeb16d9c0f6e503b3c4d)
A_Experimentalablauf.pdf (Adobe PDF, 417.12 KB, MD5: a7bbc19d471496b5e0733b79c4a72b7e)
2021-05-06: During a review process we found that the experiment had been erroneously labeled as between-participants when in fact it was a within-participants experiment.
Made available on
2019-11-26T12:32:51Z
2021-05-06T09:52:47Z
Date of first publication
2021-05-06
Publication status
other
Persistent Identifier
https://hdl.handle.net/20.500.12034/2268.2
https://doi.org/10.23668/psycharchives.4804
Language of content
eng
Is related to
https://doi.org/10.23668/psycharchives.4702
https://doi.org/10.23668/psycharchives.4803
Dewey Decimal Classification number(s)
150
DRO type
preregistration