---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- python
arxiv: 2508.02455
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: idx_lca
    dtype: int64
  - name: offset
    dtype: int64
  - name: repo
    dtype: string
  - name: commit_hash
    dtype: string
  - name: target_file
    dtype: string
  - name: line_type_lca
    dtype: string
  - name: ground_truth
    dtype: string
  - name: in_completions
    dtype: bool
  - name: completion_type
    dtype: string
  - name: non_dunder_count_intellij
    dtype: int64
  - name: non_dunder_count_jedi
    dtype: int64
  - name: start_with_
    dtype: bool
  - name: first_occurrence
    dtype: bool
  - name: intellij_completions
    sequence: string
  - name: jedi_completions
    list:
    - name: name
      dtype: string
    - name: type
      dtype: string
  - name: prefix
    dtype: string
  splits:
  - name: train
    num_bytes: 76152400
    num_examples: 5531
  download_size: 8547476
  dataset_size: 76152400
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

<h1 align="center">๐Ÿง  LCA-Starting Points</h1>

<div align="center">

[![Paper](https://img.shields.io/badge/Paper-Arxiv%3A2508.02455-B31B1B?style=flat&logo=arxiv)](https://arxiv.org/abs/2508.02455)
[![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
[![Python](https://img.shields.io/badge/Language-Python-blue.svg)](https://www.python.org/)

**A benchmark for evaluating project-local code completion ranking.**
*Curated to validate [TreeRanker](https://arxiv.org/abs/2508.02455) (ASE 2025).*

</div>

---

## ๐Ÿ“– Dataset Description

**Starting Points** is a specialized dataset designed to evaluate **code completion ranking**, with a specific focus on **locally defined identifiers** (project-specific APIs) in Python.

Most LLM benchmarks focus on global APIs (standard libraries). However, developers spend significant time using APIs defined within their own projects. **Starting Points** targets this "blind spot" by testing how well models can resolve and rank identifiers that are defined within the user's current repository but may not be visible in the immediate file context.

This dataset is a refined subset of the [Long Code Arena](https://huggingface.co/datasets/JetBrains-Research/long-code-arena), enriched with:
* **Static Analysis Data:** Valid completions resolved by the [Jedi](https://github.com/davidhalter/jedi) library.
* **Real-World IDE Suggestions:** Ranked candidate lists generated by **IntelliJ IDEA**.

### โšก Key Features
* **Focus:** Project-specific API completion (vs. standard library).
* **Language:** Python.
* **Source:** Large projects with rich user-defined classes and functions.
* **Goal:** Benchmark ranking algorithms for local development environments.
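Since every record ships a ranked candidate list alongside the ground truth, ranking quality can be scored directly, for example with mean reciprocal rank. A minimal sketch (the records here are toy examples following the field names in the schema below, not real dataset rows):

```python
def reciprocal_rank(candidates, ground_truth):
    """1/rank of the ground truth in a ranked candidate list; 0 if absent."""
    try:
        return 1.0 / (candidates.index(ground_truth) + 1)
    except ValueError:
        return 0.0

def mean_reciprocal_rank(records):
    """MRR over records carrying `intellij_completions` and `ground_truth`."""
    return sum(
        reciprocal_rank(r["intellij_completions"], r["ground_truth"])
        for r in records
    ) / len(records)

# Toy records mimicking the dataset schema (not real rows).
records = [
    {"ground_truth": "connect", "intellij_completions": ["close", "connect", "commit"]},
    {"ground_truth": "commit",  "intellij_completions": ["commit", "close", "connect"]},
]
print(mean_reciprocal_rank(records))  # (1/2 + 1/1) / 2 = 0.75
```

The same loop works unchanged over the `jedi_completions` field after projecting out each struct's `name`.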

---

## ๐Ÿ“‚ Dataset Structure

### Data Instances
Each instance represents a specific cursor position in a Python file where a **dereference operation** (e.g., `object.`) occurs. The task is to predict the correct next identifier from a list of candidates.

### ๐Ÿ“Š Data Fields

This dataset includes rich metadata to facilitate deep analysis of ranking performance, going beyond the field-level detail reported in the paper.

| Field | Type | Description |
| :--- | :--- | :--- |
| `idx` | `int64` | Unique identifier for the dataset entry. |
| `idx_lca` | `int64` | Original index of the file in the *Long Code Arena* benchmark. |
| `repo` | `string` | Name of the source repository. |
| `commit_hash` | `string` | Specific commit hash used for the snapshot. |
| `target_file` | `string` | Path to the file within the repository. |
| `offset` | `int64` | Character offset (cursor position) where completion is triggered. |
| `prefix` | `string` | Source code content preceding the cursor (the context). |
| `ground_truth` | `string` | The actual identifier the developer typed (target label). |
| `completion_type` | `string` | Metadata describing the completion scenario. |
| `start_with_` | `bool` | **True** if the ground truth starts with an underscore `_`. |
| `first_occurrence` | `bool` | **True** if the identifier has **not** appeared previously in the `prefix` (file context). Useful for evaluating "unseen" identifier performance. |
| `in_completions` | `bool` | **True** if `ground_truth` is present in the `intellij_completions` list. |
| `intellij_completions`| `sequence[string]` | Ranked list of candidates from **IntelliJ IDEA**'s completion engine. |
| `jedi_completions` | `list[struct]` | List of valid completions from **Jedi**, including `name` and `type`. |
| `non_dunder_count_intellij`| `int64` | Count of IntelliJ candidates excluding dunder methods (e.g., `__init__`). |
| `non_dunder_count_jedi` | `int64` | Count of Jedi candidates excluding dunder methods. |
| `line_type_lca` | `string` | Inherited metadata from Long Code Arena regarding line classification. |
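The derived boolean and count fields can be recomputed from the raw candidate list and prefix, which also pins down their semantics. A sketch over a toy record (hypothetical values, not a real row; the `first_occurrence` substring check is a simplification of the actual computation):

```python
def is_dunder(name: str) -> bool:
    """True for names like `__init__` (double leading and trailing underscores)."""
    return name.startswith("__") and name.endswith("__")

# Hypothetical record: the prefix ends at the cursor, right after the dot.
record = {
    "prefix": "db = Database()\nstatus = db.connect()\ndb.",
    "ground_truth": "connect",
    "intellij_completions": ["__init__", "close", "connect", "commit", "cursor", "rollback"],
}

in_completions = record["ground_truth"] in record["intellij_completions"]
non_dunder_count = sum(1 for c in record["intellij_completions"] if not is_dunder(c))
start_with_underscore = record["ground_truth"].startswith("_")
# `first_occurrence`: has the identifier NOT been seen earlier in the prefix?
first_occurrence = record["ground_truth"] not in record["prefix"]

print(in_completions, non_dunder_count, start_with_underscore, first_occurrence)
# True 5 False False
```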

---

## ๐Ÿ› ๏ธ Dataset Creation

### Curation Rationale
Standard benchmarks often overlook the difficulty of ranking identifiers that are **local to a specific project**. This dataset was created to test model performance in this realistic, everyday scenario where context from other files in the repository is crucial.

### Source Data
* **Origin:** [Long Code Arena](https://huggingface.co/datasets/JetBrains-Research/long-code-arena) benchmark.
* **Filtering:**
    * **Dereference Detection:** Uses `tree-sitter` to locate attribute-access (`.`) sites.
    * **Static Analysis:** Uses `Jedi` to resolve object types.
    * **Scope Constraint:** Retains only suggestions defined within the **same repository**, excluding standard library calls.
    * **Quality Control:** Requires the ground truth to be in the IntelliJ candidate list and the list to have at least 5 non-trivial items.
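The quality-control step can be sketched as a single predicate over a record, treating dunder methods as the trivial items. This is a simplification of the actual pipeline, using the field names from the schema above:

```python
MIN_CANDIDATES = 5  # "at least 5 non-trivial items"

def is_dunder(name: str) -> bool:
    """Dunder methods (e.g. `__init__`) are treated as trivial candidates."""
    return name.startswith("__") and name.endswith("__")

def passes_quality_control(record: dict) -> bool:
    """Keep records whose ground truth is rankable and whose list is non-trivial."""
    candidates = record["intellij_completions"]
    non_dunder = [c for c in candidates if not is_dunder(c)]
    return record["ground_truth"] in candidates and len(non_dunder) >= MIN_CANDIDATES

# A record that survives: ground truth present, 5 non-dunder candidates.
record = {
    "ground_truth": "connect",
    "intellij_completions": ["__init__", "close", "connect", "commit", "cursor", "rollback"],
}
print(passes_quality_control(record))  # True
```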

---

## ๐Ÿ“š Citation

If you use this dataset, please cite the original paper:

```bibtex
@article{cipollone2025treeranker,
  title={TreeRanker: Fast and Model-agnostic Ranking System for Code Suggestions in IDEs},
  author={Cipollone, Daniele and Bogomolov, Egor and van Deursen, Arie and Izadi, Maliheh},
  journal={arXiv preprint arXiv:2508.02455},
  year={2025}
}
```