CASEset

Overview

CASEset is the dataset and training infrastructure for CASE (Context-Aware Screen-based Estimation of Gaze). CASEset captures synchronized webcam frames, desktop screenshots, and high-precision gaze labels to enable models that reason about what users are looking at on screen.

The Problem

High-accuracy gaze tracking currently requires expensive specialized hardware. Webcam-based approaches are cheaper but lack access to on-screen context, which limits accuracy for screen-targeted tasks.

The Key Insight

Existing large-scale gaze datasets capture appearance but not the screen content users view. CASEset pairs synchronized screen content with gaze to enable context-aware models that use visual saliency and UI structure.

Technical Architecture

Dataset Pipeline

The collection infrastructure synchronizes three streams:

  • Webcam frames (face/eye appearance)

  • Desktop screenshots (visual context)

  • High-precision gaze (Tobii Pro Fusion)

Key requirements:

  • Sub-50ms synchronization across modalities

  • Temporal interaction sequences during natural tasks

  • Diverse interface contexts (browsing, documents, apps)
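The sub-50 ms requirement amounts to pairing each webcam frame with the nearest screenshot and gaze sample and discarding frames whose partners fall outside the tolerance. A minimal sketch of that alignment, assuming each stream is a sorted list of capture timestamps in seconds (the function and variable names here are illustrative, not part of the CASEset codebase):

```python
from bisect import bisect_left

SYNC_TOLERANCE_S = 0.050  # sub-50 ms requirement across modalities


def nearest(timestamps, t):
    """Return the timestamp in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda c: abs(c - t))


def align_streams(webcam_ts, screenshot_ts, gaze_ts):
    """Pair each webcam frame with the closest screenshot and gaze
    sample; drop frames whose partners exceed the sync tolerance."""
    triples = []
    for t in webcam_ts:
        s = nearest(screenshot_ts, t)
        g = nearest(gaze_ts, t)
        if abs(s - t) <= SYNC_TOLERANCE_S and abs(g - t) <= SYNC_TOLERANCE_S:
            triples.append((t, s, g))
    return triples
```

Nearest-timestamp matching with a hard tolerance keeps every retained triple within the stated 50 ms budget, at the cost of dropping frames near screenshot gaps.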

FAZE-CCT Hybrid Model

A high-level pipeline:

  • Stage 1: FAZE DT-ED processes webcam frames to extract normalized gaze vectors.

  • Stage 2: Coordinate Translator maps gaze vectors to tentative screen coordinates.

  • Stage 3: CCT (Compact Convolutional Transformer) refines predictions using a 400×400 screenshot patch centered on the tentative location and optional recent click history.
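The hand-off from Stage 2 to Stage 3 hinges on cropping a 400×400 screenshot window around the tentative coordinate while keeping the window on-screen. A minimal sketch of that crop, assuming the screenshot is a NumPy array in (height, width, channels) layout; the clamping strategy here is an assumption, not the published CASE implementation:

```python
import numpy as np

PATCH = 400  # CCT input: 400x400 screenshot patch


def crop_patch(screenshot, x, y):
    """Crop a PATCH x PATCH window centered on the tentative gaze
    estimate (x, y), clamping so the window stays fully on-screen."""
    h, w = screenshot.shape[:2]
    half = PATCH // 2
    left = int(min(max(x - half, 0), w - PATCH))
    top = int(min(max(y - half, 0), h - PATCH))
    return screenshot[top:top + PATCH, left:left + PATCH]
```

Clamping (rather than zero-padding) guarantees every patch contains real screen content, which matters when the tentative estimate lands near a monitor edge.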

Knowledge Distillation Approach

CASEset enables knowledge distillation in which labels from expensive hardware (the Tobii tracker) supervise webcam-only student models, improving their accuracy while supporting on-device, privacy-preserving inference.
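In screen-coordinate terms, this reduces to training the student against the Tobii labels, optionally blended with softer targets from a larger teacher model. A minimal sketch of such a loss, with illustrative names and an assumed blending weight (not the CASE training objective):

```python
import numpy as np


def distillation_loss(student_xy, tobii_xy, soft_xy=None, alpha=0.5):
    """Screen-coordinate distillation loss: MSE of the webcam-only
    student against the Tobii 'teacher' labels, optionally blended
    with a softer target from a larger teacher model (soft_xy)."""
    hard = np.mean((student_xy - tobii_xy) ** 2)
    if soft_xy is None:
        return hard
    soft = np.mean((student_xy - soft_xy) ** 2)
    return alpha * hard + (1 - alpha) * soft
```

With `soft_xy=None` the loss collapses to plain supervision on hardware labels; the blended form is the classic hard/soft-target distillation split adapted to regression.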

Roadmap

Target milestones run through Fall 2026. High-level phases:

  • Infrastructure & pilot data

  • Full data collection & initial model

  • Model refinement & optimization

  • Final evaluation & thesis completion

Project Structure

Citation

If you use CASEset in your research, please cite:

[Citation to be added]

License

[License information to be added]

Acknowledgments

S.U. Fall 2025 Undergraduate Research Showcase
