|
|
---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- python
arxiv: 2508.02455
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: idx_lca
    dtype: int64
  - name: offset
    dtype: int64
  - name: repo
    dtype: string
  - name: commit_hash
    dtype: string
  - name: target_file
    dtype: string
  - name: line_type_lca
    dtype: string
  - name: ground_truth
    dtype: string
  - name: in_completions
    dtype: bool
  - name: completion_type
    dtype: string
  - name: non_dunder_count_intellij
    dtype: int64
  - name: non_dunder_count_jedi
    dtype: int64
  - name: start_with_
    dtype: bool
  - name: first_occurrence
    dtype: bool
  - name: intellij_completions
    sequence: string
  - name: jedi_completions
    list:
    - name: name
      dtype: string
    - name: type
      dtype: string
  - name: prefix
    dtype: string
  splits:
  - name: train
    num_bytes: 76152400
    num_examples: 5531
  download_size: 8547476
  dataset_size: 76152400
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
|
|
<h1 align="center">LCA-Starting Points</h1>
|
|
|
|
|
<div align="center"> |
|
|
|
|
|
[![arXiv](https://img.shields.io/badge/arXiv-2508.02455-b31b1b.svg)](https://arxiv.org/abs/2508.02455)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Python](https://img.shields.io/badge/Python-3776AB.svg?logo=python&logoColor=white)](https://www.python.org/)
|
|
|
|
|
**A benchmark for evaluating project-local code completion ranking.** |
|
|
*Curated to validate [TreeRanker](https://arxiv.org/abs/2508.02455) (ASE 2025).*
|
|
|
|
|
</div> |
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Description
|
|
|
|
|
**Starting Points** is a specialized dataset designed to evaluate **code completion ranking**, with a specific focus on **locally defined identifiers** (project-specific APIs) in Python. |
|
|
|
|
|
Most LLM benchmarks focus on global APIs (standard libraries). However, developers spend significant time using APIs defined within their own projects. **Starting Points** targets this "blind spot" by testing how well models can resolve and rank identifiers that are defined within the user's current repository but may not be visible in the immediate file context. |
|
|
|
|
|
This dataset is a refined subset of the [Long Code Arena](https://huggingface.co/datasets/JetBrains-Research/long-code-arena), enriched with: |
|
|
* **Static Analysis Data:** Valid completions resolved by the [Jedi](https://github.com/davidhalter/jedi) library. |
|
|
* **Real-World IDE Suggestions:** Ranked candidate lists generated by **IntelliJ IDEA**. |
|
|
|
|
|
### Key Features
|
|
* **Focus:** Project-specific API completion (vs. standard library). |
|
|
* **Language:** Python. |
|
|
* **Source:** Large projects with rich user-defined classes and functions. |
|
|
* **Goal:** Benchmark ranking algorithms for local development environments. |
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Structure
|
|
|
|
|
### Data Instances |
|
|
Each instance represents a specific cursor position in a Python file where a **dereference operation** (e.g., `object.`) occurs. The task is to predict the correct next identifier from a list of candidates. |
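As an illustration, an instance can be pictured as follows. The field names match the dataset schema; all values below are invented for the example.

```python
# Hypothetical instance, for illustration only; the field values are
# invented, but the field names come from the dataset schema.
instance = {
    "prefix": "from parser import Tokenizer\n\ntok = Tokenizer()\ntok.",
    "ground_truth": "next_token",
    "intellij_completions": ["advance", "next_token", "peek", "reset", "skip_whitespace"],
    "in_completions": True,
}

# The ranking task: place the ground-truth identifier as high as possible
# among the candidates offered at the cursor position (end of `prefix`).
rank = instance["intellij_completions"].index(instance["ground_truth"]) + 1
print(rank)  # 1-based rank of the correct identifier → 2
```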
|
|
|
|
|
### Data Fields
|
|
|
|
|
This dataset includes rich metadata to support fine-grained analysis of ranking performance, going beyond the details reported in the paper.
|
|
|
|
|
| Field | Type | Description |
| :--- | :--- | :--- |
| `idx` | `int64` | Unique identifier for the dataset entry. |
| `idx_lca` | `int64` | Original index of the file in the *Long Code Arena* benchmark. |
| `repo` | `string` | Name of the source repository. |
| `commit_hash` | `string` | Specific commit hash used for the snapshot. |
| `target_file` | `string` | Path to the file within the repository. |
| `offset` | `int64` | Character offset (cursor position) where completion is triggered. |
| `prefix` | `string` | Source code content preceding the cursor (the context). |
| `ground_truth` | `string` | The actual identifier the developer typed (target label). |
| `completion_type` | `string` | Metadata describing the completion scenario. |
| `start_with_` | `bool` | **True** if the ground truth starts with an underscore `_`. |
| `first_occurrence` | `bool` | **True** if the identifier has **not** appeared previously in the `prefix` (file context). Useful for evaluating "unseen" identifier performance. |
| `in_completions` | `bool` | **True** if `ground_truth` is present in the `intellij_completions` list. |
| `intellij_completions` | `sequence[string]` | Ranked list of candidates from **IntelliJ IDEA**'s completion engine. |
| `jedi_completions` | `list[struct]` | List of valid completions from **Jedi**, including `name` and `type`. |
| `non_dunder_count_intellij` | `int64` | Count of IntelliJ candidates excluding dunder methods (e.g., `__init__`). |
| `non_dunder_count_jedi` | `int64` | Count of Jedi candidates excluding dunder methods. |
| `line_type_lca` | `string` | Inherited metadata from Long Code Arena regarding line classification. |
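A minimal sketch of how these fields support ranking evaluation. The metric (MRR@k) and the toy instances are illustrative assumptions; only the field names `ground_truth` and `intellij_completions` come from the schema.

```python
from typing import Iterable

def mrr_at_k(instances: Iterable[dict], k: int = 5) -> float:
    """Mean Reciprocal Rank over instances, using the ranked
    `intellij_completions` list and the `ground_truth` label."""
    scores = []
    for inst in instances:
        candidates = inst["intellij_completions"][:k]
        try:
            rank = candidates.index(inst["ground_truth"]) + 1
            scores.append(1.0 / rank)
        except ValueError:
            scores.append(0.0)  # ground truth not in the top-k candidates
    return sum(scores) / len(scores) if scores else 0.0

# Toy instances (invented values, real field names):
batch = [
    {"ground_truth": "load", "intellij_completions": ["load", "save", "reset"]},
    {"ground_truth": "save", "intellij_completions": ["load", "save", "reset"]},
    {"ground_truth": "run",  "intellij_completions": ["load", "save", "reset"]},
]
print(mrr_at_k(batch))  # (1 + 0.5 + 0) / 3 = 0.5
```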
|
|
|
|
|
--- |
|
|
|
|
|
## Dataset Creation
|
|
|
|
|
### Curation Rationale |
|
|
Standard benchmarks often overlook the difficulty of ranking identifiers that are **local to a specific project**. This dataset was created to test model performance in this realistic, everyday scenario where context from other files in the repository is crucial. |
|
|
|
|
|
### Source Data

* **Origin:** [Long Code Arena](https://huggingface.co/datasets/JetBrains-Research/long-code-arena) benchmark.
* **Filtering:**
  * **Dereference Detection:** Uses `tree-sitter` to find `.` operators.
  * **Static Analysis:** Uses `Jedi` to resolve object types.
  * **Scope Constraint:** Retains only suggestions defined within the **same repository**, excluding standard library calls.
  * **Quality Control:** Requires the ground truth to be in the IntelliJ candidate list and the list to have at least 5 non-trivial items.
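The quality-control criterion above can be sketched as follows. This is not the published filtering code; the function name and structure are assumptions, with the "at least 5 non-trivial items" threshold and the dunder exclusion taken from the description above.

```python
def is_dunder(name: str) -> bool:
    """Dunder methods such as `__init__` are treated as trivial candidates
    (cf. the `non_dunder_count_*` fields)."""
    return name.startswith("__") and name.endswith("__")

def passes_quality_filter(ground_truth: str, candidates: list[str], min_items: int = 5) -> bool:
    """Sketch of the quality-control step: the ground truth must appear in
    the candidate list, and at least `min_items` candidates must be
    non-trivial (non-dunder)."""
    non_dunder = [c for c in candidates if not is_dunder(c)]
    return ground_truth in candidates and len(non_dunder) >= min_items

# Example with invented identifiers: 5 non-dunder candidates, target present.
cands = ["__init__", "__repr__", "parse", "tokenize", "emit", "reset", "flush"]
print(passes_quality_filter("parse", cands))  # True
```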
|
|
|
|
|
--- |
|
|
|
|
|
## Citation
|
|
|
|
|
If you use this dataset, please cite the original paper: |
|
|
|
|
|
```bibtex
@article{cipollone2025treeranker,
  title={TreeRanker: Fast and Model-agnostic Ranking System for Code Suggestions in IDEs},
  author={Cipollone, Daniele and Bogomolov, Egor and van Deursen, Arie and Izadi, Maliheh},
  journal={arXiv preprint arXiv:2508.02455},
  year={2025}
}
```