Example rows (from the dataset preview):

| timestamp_ms | conversation_id | speaker_role | conversation_state | user_gaze_zone | avatar_looking_at_user | aversion_direction | contact_duration_ms | culture | transcript_segment |
|---|---|---|---|---|---|---|---|---|---|
| 0 | conv_001 | avatar | speaking | at_me | true | null | 1200 | western | So the key insight from the research is... |
| 1200 | conv_001 | avatar | speaking | at_me | false | up_right | 400 | western | ...that eye contact patterns vary significantly... |
| 1600 | conv_001 | avatar | speaking | at_me | true | null | 800 | western | ...between speakers and listeners. |
| 2400 | conv_001 | user | listening | at_me | true | null | 3200 | western | That makes sense. I read that listeners maintain about 70% eye contact while speakers only hold around 40%. |
| 5600 | conv_001 | avatar | thinking | at_me | false | down_left | 600 | western | |
Thot Pocket Gaze Behavior Dataset
Training data for Thot Pocket -- an AI avatar eye contact intelligence system that generates naturalistic gaze behavior during conversations.
Main repo: github.com/ExpertVagabond/thot-pocket Crate: crates.io/crates/thot-pocket (v0.1.0)
The training pipeline is included in the GitHub repo under train/ — a GazeTransformer model (4-layer, 128-dim, 3 output heads).
Purpose
Human eye contact follows complex, culturally informed patterns during conversation. Speakers and listeners avert gaze at different rates, in different directions, and for different durations depending on cognitive load, conversational role, and cultural norms. This dataset captures timestamped gaze annotations from real and simulated conversations to train AI avatars that exhibit believable eye contact behavior rather than the uncanny-valley stare of most virtual agents.
Schema
Each row in the JSONL data files contains:
| Field | Type | Description |
|---|---|---|
| `timestamp_ms` | int | Milliseconds from conversation start |
| `conversation_id` | string | Unique conversation identifier |
| `speaker_role` | string | Who is producing the current speech segment: `avatar` or `user` |
| `conversation_state` | string | Current avatar state: `idle`, `listening`, `thinking`, `speaking` |
| `user_gaze_zone` | string | Where the user is looking: `at_me`, `away`, `down` |
| `avatar_looking_at_user` | bool | Whether the avatar is making eye contact |
| `aversion_direction` | string or null | If the avatar is not looking at the user, the aversion direction: `up_right`, `up_left`, `right`, `left`, `down_right`, `down_left`. Null when `avatar_looking_at_user` is true. |
| `contact_duration_ms` | int | How long the current gaze state has been held (ms) |
| `culture` | string | Cultural context label: `western`, `east_asian`, `middle_eastern`, `latin_american`, `south_asian`, `african` |
| `transcript_segment` | string | The speech content at this timestamp |
Conversation States
- idle -- No active conversation; avatar is in resting gaze pattern
- listening -- User is speaking; avatar should maintain higher eye contact
- thinking -- Avatar is formulating a response; gaze aversion increases naturally
- speaking -- Avatar is delivering speech; gaze follows natural speaker patterns
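These states lend themselves to a simple state-conditioned gaze policy. The sketch below samples eye contact from a per-state probability table; the probabilities are illustrative placeholders, not statistics derived from this dataset:

```python
import random

# Illustrative target eye-contact probabilities per conversation state.
# These numbers are placeholders, not values measured from the data.
CONTACT_TARGET = {
    "idle": 0.3,
    "listening": 0.7,   # listeners tend to hold more eye contact
    "thinking": 0.2,    # aversion increases while formulating a response
    "speaking": 0.4,
}

def sample_contact(state: str, rng: random.Random) -> bool:
    """Decide whether the avatar holds eye contact in this state."""
    return rng.random() < CONTACT_TARGET[state]

rng = random.Random(42)
looks = [sample_contact("listening", rng) for _ in range(1000)]
```

A trained model like the GazeTransformer mentioned above would replace this table with learned, context-dependent probabilities.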
Gaze Zone Labels
User gaze zones observed from the avatar's perspective:
- at_me -- User is looking at the avatar's face/eyes
- away -- User is looking to the side or at something else
- down -- User is looking downward (phone, notes, etc.)
Aversion Directions
When the avatar breaks eye contact, the direction of gaze shift:
- up_right -- Associated with visual construction/imagination
- up_left -- Associated with visual recall
- right -- Associated with auditory construction
- left -- Associated with auditory recall
- down_right -- Associated with kinesthetic processing
- down_left -- Associated with internal dialogue
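When synthesizing gaze breaks, a direction has to be chosen from this set. A sketch of a weighted sampler; the weighting toward downward directions during thinking is a hypothetical bias loosely following the associations above, not a rule from the dataset:

```python
import random

# The six aversion directions listed above.
AVERSIONS = ["up_right", "up_left", "right", "left", "down_right", "down_left"]

# Hypothetical bias: during "thinking", favor downward aversions
# (internal dialogue, kinesthetic processing per the list above).
THINKING_WEIGHTS = [1, 1, 1, 1, 3, 3]

def pick_aversion(state: str, rng: random.Random) -> str:
    """Sample an aversion direction, biased when the avatar is thinking."""
    weights = THINKING_WEIGHTS if state == "thinking" else [1] * len(AVERSIONS)
    return rng.choices(AVERSIONS, weights=weights, k=1)[0]

rng = random.Random(0)
direction = pick_aversion("thinking", rng)
```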
Data Files
```
data/
  train.jsonl    # Training split
```
How to Load
```python
from datasets import load_dataset

dataset = load_dataset("purplesquirrelnetworks/thot-pocket-gaze")
```
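Once loaded, rows behave like plain dicts, so simple aggregations need no extra tooling. A sketch computing mean gaze-hold duration per conversation state, using values from the preview rows rather than a network download:

```python
# A few rows in the schema above (values taken from the preview table).
rows = [
    {"conversation_state": "speaking", "contact_duration_ms": 1200},
    {"conversation_state": "speaking", "contact_duration_ms": 400},
    {"conversation_state": "speaking", "contact_duration_ms": 800},
    {"conversation_state": "listening", "contact_duration_ms": 3200},
]

# Group gaze-hold durations by avatar state, then average each group.
by_state = {}
for row in rows:
    by_state.setdefault(row["conversation_state"], []).append(row["contact_duration_ms"])

mean_ms = {state: sum(d) / len(d) for state, d in by_state.items()}
```

The same loop works unchanged over the loaded `dataset` split.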
Contributing Data
We welcome contributions of annotated gaze behavior data. To contribute:
- Fork this repository
- Add your annotated data in JSONL format following the schema above
- Ensure each row includes all required fields
- Include a `culture` label for cultural context
- Open a pull request with a description of your data source and annotation methodology
Annotation Guidelines
- Timestamps should be relative to conversation start (first utterance = 0ms)
- Gaze zone labels should be annotated at a minimum of 100ms granularity
- Aversion direction must be null when `avatar_looking_at_user` is true
- Cultural labels should reflect the cultural background of the human participant
- Transcript segments should capture the speech content at the annotation timestamp
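Several of these guidelines are machine-checkable before opening a pull request. A sketch (the function name is ours) verifying the timestamp and aversion invariants over one conversation's rows:

```python
def check_conversation(rows: list[dict]) -> bool:
    """Verify annotation invariants for a single conversation."""
    assert rows[0]["timestamp_ms"] == 0          # first utterance at 0 ms
    prev = -1
    for row in rows:
        assert row["timestamp_ms"] > prev        # timestamps strictly increase
        prev = row["timestamp_ms"]
        if row["avatar_looking_at_user"]:        # null aversion while in contact
            assert row["aversion_direction"] is None
    return True

ok = check_conversation([
    {"timestamp_ms": 0, "avatar_looking_at_user": True, "aversion_direction": None},
    {"timestamp_ms": 1200, "avatar_looking_at_user": False, "aversion_direction": "up_right"},
])
```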
Data Sources We Accept
- Webcam-recorded conversations with manual gaze annotation
- Eye-tracker data from conversation studies
- Expert-annotated synthetic conversation scenarios
- Published research datasets re-formatted to this schema (with appropriate licensing)
License
MIT
Citation
```bibtex
@dataset{thot_pocket_gaze_2026,
  title={Thot Pocket Gaze Behavior Dataset},
  author={Purple Squirrel Networks},
  year={2026},
  url={https://huggingface.co/datasets/purplesquirrelnetworks/thot-pocket-gaze},
  license={MIT}
}
```