Inferring target locations from gaze data: A smartphone study
This article is a preprint and has not been certified by peer review.
Author(s) / Creator(s)
Mueller, Stefanie
Abstract / Description
Although smartphones are widely used in everyday life, studies of viewing behavior mainly employ desktop computers. This study examines whether closely spaced target locations on a smartphone can be decoded from gaze. Subjects wore a head-mounted eye tracker and fixated a target that appeared successively at 30 positions spaced 10.0 x 9.0 mm apart. Two conditions were run: "hand-held" (phone in the subject's hand) and "mounted" (phone on a surface). Linear mixed models were fitted to examine whether gaze differed between targets, and t-tests on root-mean-square errors were calculated to evaluate the deviation between gaze and target positions. To decode target positions from gaze data, we trained a classifier and assessed its performance for every subject and condition. While gaze positions differed between targets (main effect of "target"), gaze deviated from the true positions. Classification accuracy for the 30 locations varied considerably between subjects ("mounted": 30 to 93 %; "hand-held": 8 to 100 %).
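The deviation measure described in the abstract — root-mean-square error between recorded gaze positions and the true target positions — can be sketched as follows. This is a minimal illustration, not code from the study; the function name, toy data, and the assumption of 2-D coordinates in millimetres are mine.

```python
import numpy as np

def rmse(gaze_xy, target_xy):
    """Root-mean-square error between gaze samples and target positions.

    gaze_xy, target_xy: array-likes of shape (n, 2), assumed to be
    2-D screen coordinates in millimetres (an illustrative choice).
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    target_xy = np.asarray(target_xy, dtype=float)
    # Euclidean deviation per sample, then root-mean-square across samples.
    per_sample = np.linalg.norm(gaze_xy - target_xy, axis=1)
    return float(np.sqrt(np.mean(per_sample ** 2)))

# Toy example: gaze offset by a constant 1 mm in x from two targets.
gaze = [[1.0, 0.0], [11.0, 9.0]]
targets = [[0.0, 0.0], [10.0, 9.0]]
print(rmse(gaze, targets))  # 1.0
```

A per-subject, per-condition RMSE of this kind is what the abstract's t-tests compare against zero deviation.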
Keyword(s)
fixations; mobile devices; accuracy; gaze positions
Date of first publication
2019
Citation
Mueller, S. (2019). Inferring target locations from gaze data: A smartphone study. Leibniz Institut für Psychologische Information und Dokumentation (ZPID). https://doi.org/10.23668/psycharchives.2491
File: ETRA_Mueller_2019.pdf (Adobe PDF, 568.85 KB; MD5: 1281ebca655025bee3a2e7d199f897fe)
There are no other versions of this object.
PsychArchives acquisition timestamp: 2019-06-19T06:14:18Z
Made available on: 2019-06-19T06:14:18Z
Publication status: acceptedVersion
Review status: peerReviewed
Persistent Identifier: https://hdl.handle.net/20.500.12034/2115
Persistent Identifier: https://doi.org/10.23668/psycharchives.2491
Language of content: eng
Is version of: http://dx.doi.org/10.1145/3314111.3319847
Is related to: https://doi.org/10.23668/psycharchives.2500
Is related to: https://doi.org/10.1145/3314111.3319847
Dewey Decimal Classification number(s): 150
DRO type: preprint