--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- visual-question-answering |
|
|
- text-generation |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
tags: |
|
|
- android |
|
|
- gui grounding |
|
|
- gui agent |
|
|
- english app |
|
|
- chinese app |
|
|
- long-horizon-planning |
|
|
pretty_name: AndroidLens |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
viewer: false |
|
|
--- |
|
|
|
|
|
|
|
|
# AndroidLens: **Long-latency Evaluation with Nested Sub-targets for Android GUI Agents** |
|
|
|
|
|
AndroidLens is a challenging benchmark for mobile GUI agents, featuring **571 real-world, long-horizon tasks** in both **Chinese and English**, with an average of **26.1 steps per task**. It supports evaluation of critical capabilities: |
|
|
|
|
|
- **Long-horizon planning** under multi-constraint & multi-goal scenarios |
|
|
- **298 cross-app tasks and 273 single-app tasks**, covering 74 real-world applications (e.g., WeChat, Google Drive, Taobao, Maps). |
|
|
- **Robustness to real anomalies**: ads, permission pop-ups, login redirects |
|
|
- **Multi-trajectory ground truth** to reduce path bias |
|
|
- **Milestone-based progress tracking** via nested sub-targets |
|
|
|
|
|
This dataset is designed for both **static** (step-wise prediction) and **dynamic** (real-device execution) evaluation. |
|
|
|
|
|
> 📄 **Paper**: [AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents](http://arxiv.org/abs/2512.21302) |
|
|
> 💾 **GitHub**: [https://github.com/alibaba/AndroidLens](https://github.com/alibaba/AndroidLens) |
|
|
> 🤗 **Hugging Face**: [yuecao0119/AndroidLens](https://huggingface.co/datasets/yuecao0119/AndroidLens) |
|
|
|
|
|
--- |
|
|
|
|
|
## 🗂️ Dataset Structure |
|
|
|
|
|
The dataset is organized as follows:
|
|
|
|
|
``` |
|
|
test/ |
|
|
├─ en/ # English tasks |
|
|
│ └─ <episode_id>/ |
|
|
│ ├─ <episode_id>.json # Full episode trajectory (list of steps) |
|
|
│ ├─ <episode_id>_0.png # Screenshot at step 0 |
|
|
│ ├─ <episode_id>_1.png |
|
|
│ └─ ... |
|
|
└─ zh/ # Chinese tasks |
|
|
└─ <episode_id>/ |
|
|
├─ <episode_id>.json |
|
|
├─ <episode_id>_0.png |
|
|
└─ ... |
|
|
``` |
|
|
|
|
|
Each `<episode_id>.json` file contains a **list of step objects**, with one object per interaction step. |
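As a sketch (paths and `episode_id` placeholders as in the layout above; `load_episode` is not part of the dataset tooling), an episode can be loaded like this:

```python
import json
from pathlib import Path


def load_episode(test_root: str, lang: str, episode_id: str) -> list:
    """Load the ordered list of step objects for one episode.

    `test_root` is the extracted `test/` directory; `lang` is "en" or "zh".
    """
    ep_dir = Path(test_root) / lang / episode_id
    with open(ep_dir / f"{episode_id}.json", encoding="utf-8") as f:
        steps = json.load(f)
    # Steps are ordered by `step_id`; each step's screenshot sits next to
    # the JSON as `<episode_id>_<step_id>.png`.
    return sorted(steps, key=lambda s: s["step_id"])
```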
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### 🏷️ Task Category Codes (`types`) |
|
|
|
|
|
The `types` field uses a hierarchical two-digit code system to classify task complexity and structure. |
|
|
These categories align with AndroidLens’s taxonomy of **Multi-goal (1-X)**, **Multi-constraint (2-X)**, and **Domain-specific (3-X)** tasks, enabling fine-grained analysis of agent performance across different challenge dimensions. |
|
|
|
|
|
| Code | Category | Description | |
|
|
|------|----------|-------------| |
|
|
| **1-1** | Single-app Unrelated Tasks | Multiple independent subtasks within **one app**, with no logical dependency |
|
|
| **1-2** | Single-app Related Tasks | Multiple **dependent** subtasks within **one app** (e.g., “Search product → add to cart → checkout”) | |
|
|
| **1-3** | Cross-app Unrelated Tasks | Independent actions across **multiple apps** (e.g., “Send message on WeChat, then play music on QQ Music”) | |
|
|
| **1-4** | Cross-app Related Tasks | **Interdependent** operations across **multiple apps** (e.g., copy link from Chrome → paste in Drive → upload) | |
|
|
| **2-1** | Operation-Level Constraints | Tasks requiring precise **widget-level control**, such as exact text input, time/date pickers, or multi-condition filtering | |
|
|
| **2-2** | Page-Level Constraints | Tasks with navigation constraints like specific tab/category selection, sort/filter order, or view state | |
|
|
| **3-1** | Batch Operation Tasks | Repeated actions on multiple items (e.g., “Empty shopping carts on Taobao, JD, and Pinduoduo”) | |
|
|
| **3-2** | Combine with VLM Capabilities | Tasks that **leverage the agent’s built-in multimodal abilities**, such as translation, comparison, summarization, or OCR-to-action | |
|
|
|
|
|
--- |
|
|
|
|
|
## 📑 Step-level Data Format |
|
|
|
|
|
Each step in the JSON list includes: |
|
|
|
|
|
| Field | Type | Description | |
|
|
|------|------|-------------| |
|
|
| `episode_id` | `str` | Unique task ID (UUID) | |
|
|
| `language` | `str` | `"en"` or `"zh"` | |
|
|
| `app` | `List[str]` | Sequence of involved apps (e.g., `["Google Chrome", "Google Drive"]`) | |
|
|
| `episode_length` | `int` | Total steps in the full trajectory | |
|
|
| `step_id` | `int` | Current step index (0-based) | |
|
|
| `instruction` | `str` | High-level user goal (same for all steps in the episode) | |
|
|
| `image_path` | `str` | Relative path to the screenshot (e.g., `en/<episode_id>/<episode_id>_0.png`) |
|
|
| `image_width`, `image_height` | `int` | Original resolution of the screenshot | |
|
|
| `result_action_type` | `List[int]` | Action code (e.g., `[4]` = Click); see **Action Type Mapping** below |
|
|
| `result_touch_yx` | `List[str]` | **Normalized** touch coordinates as string: `"[y, x]"` in range `[0, 1]` | |
|
|
| `result_lift_yx` | `List[str]` | End point for swipe (same as `touch` for tap) | |
|
|
| `result_action_text` | `List[str]` | Text to input (empty if none) | |
|
|
| `duration` | `List[null/float]` | Action hold time (null for tap) | |
|
|
| `low_instruction` | `str` | Step-specific guidance (for low-level evaluation) | |
|
|
| `milestone` | `dict` | **Nested sub-goal** info (see below) | |
|
|
| `types` | `List[str]` | Task category code (e.g., `"1-2"`); see the full mapping above. |
|
|
|
|
|
> 🔍 **Coordinate Note**: |
|
|
> - `result_touch_yx` uses **relative coordinates** in `[0, 1]`, with format `"[y, x]"` (note: *y first*). |
|
|
> - To convert to absolute pixel: |
|
|
> ```python |
|
|
> y_abs = float(y_rel) * image_height |
|
|
> x_abs = float(x_rel) * image_width |
|
|
> ``` |
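Since `result_touch_yx` stores the pair as a string, a small helper (a sketch; `yx_to_pixels` is not part of the dataset) can parse and scale it:

```python
import json


def yx_to_pixels(yx_str: str, image_width: int, image_height: int):
    """Parse a normalized "[y, x]" string and return absolute (x, y) pixels."""
    y_rel, x_rel = json.loads(yx_str)  # note: y comes first in the string
    return round(float(x_rel) * image_width), round(float(y_rel) * image_height)


# A tap at the center of a 1080x2400 screenshot:
x_abs, y_abs = yx_to_pixels("[0.5, 0.5]", 1080, 2400)  # -> (540, 1200)
```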
|
|
|
|
|
--- |
|
|
|
|
|
## 🎯 Milestone Format |
|
|
|
|
|
The `milestone` field enables **fine-grained progress evaluation**: |
|
|
|
|
|
```json |
|
|
{ |
|
|
"sub-target": "Open Google Chrome and search for 'panda'", |
|
|
"idx": 1, |
|
|
"bbox": [0.023, 0.121, 0.976, 0.189],
|
|
"text": "panda", |
|
|
"state": ["selected"] |
|
|
} |
|
|
``` |
|
|
|
|
|
- `idx`: milestone index (ordered) |
|
|
- `bbox`: bounding box of the key UI element, as normalized `[x1, y1, x2, y2]`
|
|
- `text` / `state`: expected content or widget state |
|
|
|
|
|
Milestones support **ordered** and **unordered** sub-goals for complex tasks. |
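For illustration only (this is not the official evaluator), a milestone's `bbox` can be used to check whether a predicted click lands on the key element:

```python
def click_in_bbox(y_rel: float, x_rel: float, bbox: list) -> bool:
    """Return True if a normalized (y, x) click lies inside a normalized
    [x1, y1, x2, y2] milestone bounding box."""
    x1, y1, x2, y2 = bbox
    return x1 <= x_rel <= x2 and y1 <= y_rel <= y2


# Using the example milestone above:
hit = click_in_bbox(0.15, 0.5, [0.023, 0.121, 0.976, 0.189])  # -> True
```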
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 Action Type Mapping |
|
|
|
|
|
Although the original [AgentCPM-GUI](https://github.com/OpenBMB/AgentCPM-GUI) action space defines actions by name, this dataset encodes them as numeric codes in `result_action_type`. Based on AndroidLens annotation practice, the common mapping is:
|
|
|
|
|
| Code | Action | Required Fields | |
|
|
|------|------------------|-------------------------------------| |
|
|
| 1 | Wait | `duration` | |
|
|
| 3 | Type | `result_action_text` | |
|
|
| 4 | Click | `result_touch_yx` | |
|
|
| 4 | LongPress | `result_touch_yx`, `duration` | |
|
|
| 4 | Swipe | `result_touch_yx`, `result_lift_yx` | |
|
|
| 5 | PressBack | — | |
|
|
| 6 | PressHome | — | |
|
|
| 10 | Terminate | — | |
|
|
|
|
|
> Confirm the exact mapping against the annotation code if needed. AndroidLens executes actions via ADB, with explicit start and end points for swipes.
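Assuming the tentative mapping above, code `4` can be disambiguated by which fields are populated (a sketch; `decode_action` is a hypothetical helper, not part of the dataset tooling):

```python
def decode_action(step: dict) -> str:
    """Map a step's numeric `result_action_type` to an action name,
    splitting code 4 into Click / LongPress / Swipe by its fields."""
    code = step["result_action_type"][0]
    if code == 4:
        if step["result_touch_yx"] != step["result_lift_yx"]:
            return "Swipe"      # distinct start and end points
        if step["duration"] and step["duration"][0]:
            return "LongPress"  # tap held for a duration
        return "Click"
    names = {1: "Wait", 3: "Type", 5: "PressBack", 6: "PressHome", 10: "Terminate"}
    return names.get(code, "Unknown")
```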
|
|
|
|
|
--- |
|
|
|
|
|
## 📜 License |
|
|
|
|
|
AndroidLens is released under the **Apache-2.0 License**. |
|
|
Screenshots are derived from real app usage for research purposes only. Comply with app store policies and local regulations. |
|
|
|
|
|
--- |
|
|
|
|
|
## ✏️ Citation |
|
|
|
|
|
If this work is helpful for your research, please consider citing the following BibTeX entry. |
|
|
|
|
|
```bibtex |
|
|
@article{cao2025androidlens, |
|
|
title={AndroidLens: Long-latency Evaluation with Nested Sub-targets for Android GUI Agents}, |
|
|
author={Yue Cao and Yingyao Wang and Pi Bu and Jingxuan Xing and Wei Jiang and Zekun Zhu and Junpeng Ma and Sashuai Zhou and Tong Lu and Jun Song and Yu Cheng and Yuning Jiang and Bo Zheng}, |
|
|
year={2025}, |
|
|
journal={arXiv preprint arXiv:2512.21302}, |
|
|
} |
|
|
``` |
|
|
|
|
|
|