Recognition: unknown
TurkerGaze: Crowdsourcing Saliency with Webcam based Eye Tracking
Abstract
Traditional eye tracking requires specialized hardware, which makes collecting gaze data from many observers expensive, tedious, and slow. As a result, existing saliency prediction datasets are orders of magnitude smaller than typical datasets for other visual recognition tasks. The small size of these datasets limits the potential for training data-intensive algorithms and causes overfitting in benchmark evaluation. To address this deficiency, this paper introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on Amazon Mechanical Turk (AMTurk). Through a combination of careful algorithm and gaming-protocol design, our system obtains eye tracking data for saliency prediction comparable to data gathered in a traditional lab setting, at lower cost and with less effort on the part of the researchers. Using this tool, we build a saliency dataset for a large number of natural images. We will open-source our tool and provide a web server where researchers can upload their images to get eye tracking results from AMTurk.
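The abstract does not detail the tracking algorithm itself, but the general recipe behind appearance-based webcam gaze estimation and fixation-map construction is compact enough to sketch. The snippet below is a minimal illustration under assumptions, not the paper's implementation: the ridge-regression calibration, raw-pixel eye features, the Gaussian width, and the helper names (fit_gaze_regressor, fixations_to_saliency) are all hypothetical choices made for the example.

import numpy as np
from scipy.ndimage import gaussian_filter

def fit_gaze_regressor(eye_patches, screen_points, alpha=1.0):
    # Calibration: during a game, the worker looks at known on-screen
    # targets while the webcam records eye patches. Here we fit a ridge
    # regression from flattened patch pixels to (x, y) screen positions.
    # (Hypothetical sketch; the paper's actual features/model may differ.)
    X = np.asarray(eye_patches, dtype=np.float64).reshape(len(eye_patches), -1)
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    Y = np.asarray(screen_points, dtype=np.float64)
    # Closed-form ridge solution: W = (X^T X + alpha I)^-1 X^T Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

def predict_gaze(W, eye_patches):
    # Map new eye patches to estimated on-screen gaze coordinates.
    X = np.asarray(eye_patches, dtype=np.float64).reshape(len(eye_patches), -1)
    X = np.hstack([X, np.ones((len(X), 1))])
    return X @ W  # (n, 2) predicted gaze points

def fixations_to_saliency(fixations, height, width, sigma=25.0):
    # Accumulate fixation points into a map, then Gaussian-blur it.
    fix_map = np.zeros((height, width))
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            fix_map[yi, xi] += 1.0
    sal = gaussian_filter(fix_map, sigma=sigma)
    return sal / sal.max() if sal.max() > 0 else sal

Blurring recorded fixations with a Gaussian whose width roughly matches one degree of visual angle is the standard way saliency datasets turn raw gaze points into ground-truth maps, which is what fixations_to_saliency mimics here.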
This paper has not been read by Pith yet.
Forward citations
Cited by 3 Pith papers
- Component-Based Out-of-Distribution Detection
  CoOD decomposes inputs into components and applies its Component Shift Score and Compositional Consistency Score to improve detection of both standard and compositional out-of-distribution data.
- TTL: Test-time Textual Learning for OOD Detection with Pretrained Vision-Language Models
  TTL dynamically learns OOD textual semantics from unlabeled test streams via prompt updates, purification, and a knowledge bank to improve detection performance in pretrained VLMs.
- GazeCode: Recall-Based Verification for Higher-Quality In-the-Wild Mobile Gaze Data Collection
  GazeCode uses multi-digit recall tasks with an anti-peripheral stimulus design to strengthen label validity in unsupervised mobile gaze data collection.