AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents
AndroidLens is a challenging benchmark for mobile GUI agents, featuring 571 real-world, long-horizon tasks in both Chinese and English, with an average of 26.1 steps per task. It supports evaluation of critical capabilities:
- Long-horizon planning under multi-constraint & multi-goal scenarios
- 298 cross-app tasks and 273 single-app tasks, covering 74 real-world applications (e.g., WeChat, Google Drive, Taobao, Maps).
- Robustness to real anomalies: ads, permission pop-ups, login redirects
- Multi-trajectory ground truth to reduce path bias
- Milestone-based progress tracking via nested sub-targets
This dataset is designed for both static (step-wise prediction) and dynamic (real-device execution) evaluation.
📄 Paper: AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents
💾 GitHub: https://github.com/alibaba/AndroidLens
🤗 Hugging Face: yuecao0119/AndroidLens
## 🗂️ Dataset Structure
The dataset is organized as follows:

```
test/
├── en/                            # English tasks
│   └── <episode_id>/
│       ├── <episode_id>.json      # Full episode trajectory (list of steps)
│       ├── <episode_id>_0.png     # Screenshot at step 0
│       ├── <episode_id>_1.png
│       └── ...
└── zh/                            # Chinese tasks
    └── <episode_id>/
        ├── <episode_id>.json
        ├── <episode_id>_0.png
        └── ...
```

Each `<episode_id>.json` file contains a list of step objects, one per interaction step.
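As a minimal sketch of reading this layout (the function and argument names here are illustrative, not part of the dataset tooling), an episode can be loaded like this:

```python
import json
from pathlib import Path

def load_episode(root, language: str, episode_id: str) -> list[dict]:
    """Load the list of step objects for one episode.

    root: path to the `test/` directory; language: "en" or "zh".
    """
    ep_dir = Path(root) / language / episode_id
    # The trajectory JSON sits next to the per-step screenshots.
    return json.loads((ep_dir / f"{episode_id}.json").read_text(encoding="utf-8"))
```

Screenshots for step `i` can then be resolved as `ep_dir / f"{episode_id}_{i}.png"`.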
## 🏷️ Task Category Codes (`types`)

The `types` field uses a hierarchical two-digit code system to classify task complexity and structure.
These categories align with AndroidLens's taxonomy of Multi-goal (1-X), Multi-constraint (2-X), and Domain-specific (3-X) tasks, enabling fine-grained analysis of agent performance across different challenge dimensions.
| Code | Category | Description |
|---|---|---|
| 1-1 | Single-app Unrelated Tasks | Multiple independent subtasks within one app, with no logical dependency. |
| 1-2 | Single-app Related Tasks | Multiple dependent subtasks within one app (e.g., "Search product → add to cart → checkout"). |
| 1-3 | Cross-app Unrelated Tasks | Independent actions across multiple apps (e.g., "Send message on WeChat, then play music on QQ Music"). |
| 1-4 | Cross-app Related Tasks | Interdependent operations across multiple apps (e.g., copy link from Chrome → paste in Drive → upload). |
| 2-1 | Operation-Level Constraints | Tasks requiring precise widget-level control, such as exact text input, time/date pickers, or multi-condition filtering. |
| 2-2 | Page-Level Constraints | Tasks with navigation constraints like specific tab/category selection, sort/filter order, or view state. |
| 3-1 | Batch Operation Tasks | Repeated actions on multiple items (e.g., "Empty shopping carts on Taobao, JD, and Pinduoduo"). |
| 3-2 | Combine with VLM Capabilities | Tasks that leverage the agent's built-in multimodal abilities, such as translation, comparison, summarization, or OCR-to-action. |
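For per-category analysis, a small helper (a sketch; `count_categories` is a hypothetical name, and the `types` field is read from the first step of each episode) can tally how loaded episodes distribute over these codes:

```python
from collections import Counter

def count_categories(episodes: list[list[dict]]) -> Counter:
    """Tally task-category codes across a list of episode trajectories.

    `types` is identical for all steps of an episode, so step 0 suffices.
    """
    counts = Counter()
    for steps in episodes:
        for code in steps[0].get("types", []):
            counts[code] += 1
    return counts
```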
## 📊 Step-level Data Format
Each step in the JSON list includes:
| Field | Type | Description |
|---|---|---|
| `episode_id` | `str` | Unique task ID (UUID) |
| `language` | `str` | `"en"` or `"zh"` |
| `app` | `List[str]` | Sequence of involved apps (e.g., `["Google Chrome", "Google Drive"]`) |
| `episode_length` | `int` | Total steps in the full trajectory |
| `step_id` | `int` | Current step index (0-based) |
| `instruction` | `str` | High-level user goal (same for all steps in the episode) |
| `image_path` | `str` | Relative path to screenshot (e.g., `en/.../0.png`) |
| `image_width`, `image_height` | `int` | Original resolution of the screenshot |
| `result_action_type` | `List[int]` | Action code (`[4]` = Click; see note below) |
| `result_touch_yx` | `List[str]` | Normalized touch coordinates as string: `"[y, x]"` in range `[0, 1]` |
| `result_lift_yx` | `List[str]` | End point for swipe (same as touch for tap) |
| `result_action_text` | `List[str]` | Text to input (empty if none) |
| `duration` | `List[null/float]` | Action hold time (`null` for tap) |
| `low_instruction` | `str` | Step-specific guidance (for low-level evaluation) |
| `milestone` | `dict` | Nested sub-goal info (see below) |
| `types` | `List[str]` | Task category code (e.g., `"1-2"`); see full mapping above |
📐 **Coordinate Note:** `result_touch_yx` uses relative coordinates in `[0, 1]`, formatted as the string `"[y, x]"` (note: y comes first). To convert to absolute pixels:

```python
y_abs = float(y_rel) * image_height
x_abs = float(x_rel) * image_width
```
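Since the coordinates are stored as a string, parsing and scaling can be combined in one helper (a sketch; the function name is mine):

```python
import ast

def touch_to_pixels(touch_yx: str, image_width: int, image_height: int) -> tuple[int, int]:
    """Convert a normalized '[y, x]' coordinate string to absolute (x, y) pixels."""
    y_rel, x_rel = ast.literal_eval(touch_yx)  # the dataset stores y first
    return round(float(x_rel) * image_width), round(float(y_rel) * image_height)
```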
## 🎯 Milestone Format

The `milestone` field enables fine-grained progress evaluation:
```json
{
  "sub-target": "Open Google Chrome and search for 'panda'",
  "idx": 1,
  "bbox": [0.023, 0.121, 0.976, 0.189],  // [x1, y1, x2, y2] in normalized coords
  "text": "panda",
  "state": ["selected"]
}
```
- `idx`: milestone index (ordered)
- `bbox`: bounding box of key UI element (normalized, xy format)
- `text` / `state`: expected content or widget state
Milestones support ordered and unordered sub-goals for complex tasks.
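One subtlety when scoring against milestones: `bbox` is x-first (`[x1, y1, x2, y2]`) while touch coordinates are y-first (`[y, x]`), so the axes must be swapped before comparing. A minimal hit-test sketch (function name and `tol` parameter are mine):

```python
def click_hits_milestone(touch_yx, bbox, tol=0.0):
    """Check whether a normalized [y, x] click lands inside a milestone bbox.

    bbox is [x1, y1, x2, y2]; both are in normalized [0, 1] coordinates.
    """
    y, x = touch_yx            # touch is y-first
    x1, y1, x2, y2 = bbox      # bbox is x-first
    return (x1 - tol) <= x <= (x2 + tol) and (y1 - tol) <= y <= (y2 + tol)
```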
## 🔧 Action Type Mapping

Although the original AgentCPM-GUI defines actions via names, the dataset stores numeric codes in `result_action_type`. Based on AndroidLens annotation practice, the common mapping is:
| Code | Action | Required Fields |
|---|---|---|
| 1 | Wait | `duration` |
| 3 | Type | `result_action_text` |
| 4 | Click | `result_touch_yx` |
| 4 | LongPress | `result_touch_yx`, `duration` |
| 4 | Swipe | `result_touch_yx`, `result_lift_yx` |
| 5 | PressBack | — |
| 6 | PressHome | — |
| 10 | Terminate | — |
Confirm the exact mapping against the annotation code if needed. AndroidLens uses ADB-based actions with explicit start/end points for swipes.
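Because code 4 covers Click, LongPress, and Swipe, the named action must be recovered from which fields are populated. A hedged decoding sketch, assuming the field semantics in the table above (the function name is mine):

```python
ACTION_NAMES = {1: "Wait", 3: "Type", 5: "PressBack", 6: "PressHome", 10: "Terminate"}

def decode_action(step: dict) -> str:
    """Map a step's numeric action code to a named action."""
    code = step["result_action_type"][0]
    if code != 4:
        return ACTION_NAMES.get(code, f"Unknown({code})")
    # Code 4 is ambiguous: disambiguate via lift point and hold duration.
    touch = step.get("result_touch_yx", [""])[0]
    lift = step.get("result_lift_yx", [""])[0]
    duration = step.get("duration", [None])[0]
    if lift and lift != touch:
        return "Swipe"
    if duration:
        return "LongPress"
    return "Click"
```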
## 📄 License
AndroidLens is released under the Apache-2.0 License.
Screenshots are derived from real app usage for research purposes only. Comply with app store policies and local regulations.
## ✍️ Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@article{cao2025androidlens,
  title={AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents},
  author={Yue Cao and Yingyao Wang and Pi Bu and Jingxuan Xing and Wei Jiang and Zekun Zhu and Junpeng Ma and Sashuai Zhou and Tong Lu and Jun Song and Yu Cheng and Yuning Jiang and Bo Zheng},
  journal={arXiv preprint arXiv:2512.21302},
  year={2025},
}
```