---
dataset_info:
  features:
    - name: task_id
      dtype: string
    - name: repo
      dtype: string
    - name: file_path
      dtype: string
    - name: function_name
      dtype: string
    - name: qualified_name
      dtype: string
    - name: function_type
      dtype: string
    - name: class_name
      dtype: string
    - name: prompt
      dtype: string
    - name: signature
      dtype: string
    - name: docstring
      dtype: string
    - name: canonical_solution
      dtype: string
    - name: full_function
      dtype: string
    - name: tests
      dtype: string
    - name: setup
      dtype: string
    - name: metadata
      dtype: string
    - name: validation
      dtype: string
    - name: original_task_id
      dtype: string
    - name: full_context
      dtype: string
  splits:
    - name: train
      num_bytes: 9279426
      num_examples: 55
  download_size: 1911300
  dataset_size: 9279426
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# CodeDP Repo-Patch Benchmark (CPT-friendly)

A repository-level code-completion benchmark for evaluating continual pre-training (CPT) models. It contains 55 tasks from 12 real-world repositories, each requiring the model to generate a function body given file-level context.

## Prompt Format

The `prompt` field contains the file context up to the target function's signature and docstring (truncated at the original `# TODO: Implement this function` marker). This format is directly usable by base/completion models: the model simply continues generating the function body.

The `full_context` field preserves the original full-file prompt (including code after the target function) for reference or fill-in-the-middle approaches.
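Since `prompt` is `full_context` truncated at the TODO marker, a fill-in-the-middle prefix/suffix pair can be recovered by splitting on that marker. A minimal sketch (the helper name is ours, and it assumes the marker appears verbatim in the file):

```python
# Split a full-file context at the TODO marker to get prefix/suffix
# pieces for fill-in-the-middle (FIM) prompting. Hypothetical helper,
# not part of the benchmark itself.
MARKER = "# TODO: Implement this function"

def split_for_fim(full_context: str) -> tuple[str, str]:
    """Return (prefix, suffix) around the first occurrence of the marker."""
    prefix, _, suffix = full_context.partition(MARKER)
    return prefix, suffix

# Illustrative input, not a real record from the dataset.
example = "def add(a, b):\n    # TODO: Implement this function\n\nprint(add(1, 2))\n"
prefix, suffix = split_for_fim(example)
```

The prefix then plays the role of `prompt`, while the suffix supplies the downstream code for FIM-capable models.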

## Base/Completion Models

```python
from datasets import load_dataset

ds = load_dataset("melihcatal/codedp-bench-repo-patch-cpt", split="train")

# prompt is ready for completion: ends with signature + docstring
prompt = ds[0]["prompt"]
# Model generates the function body from here
```

## Instruction Models

For chat/instruction models, wrap the prompt in a chat template:

```python
msg = f"Complete the implementation of `{ds[0]['function_name']}`. Return ONLY the function body.\n\n```python\n{ds[0]['prompt']}\n```"
```

## Fields

| Field | Description |
|---|---|
| `prompt` | File context up to function signature + docstring (CPT-ready) |
| `full_context` | Full file with `# TODO` marker and downstream code |
| `canonical_solution` | Reference function body |
| `signature` | Function signature |
| `docstring` | Function docstring (empty for 22/55 tasks) |
| `function_name` | Target function name |
| `class_name` | Enclosing class if the target is a method, else null |
| `tests` | JSON list of pytest test cases |
| `setup` | JSON with repo URL, install command, and commit SHA |
| `full_function` | Complete function (signature + docstring + body) |
| `metadata` | JSON with `body_lines`, `file_lines`, `has_docstring`, `num_tests` |
| `validation` | Test validation status |

## Statistics

- 55 tasks from 12 repositories
- 13 class methods, 42 standalone functions
- 33 with docstrings, 22 without
- Prompt lengths (after truncation): median ~2,800 chars (vs ~6,300 before)
- Reference body lengths: median 437 chars

## Metrics

Reference-based metrics (no repo setup needed):

- **BLEU-4**: token-level BLEU score
- **CodeBLEU**: syntax-aware code similarity
- **Edit Similarity**: 1 - normalized Levenshtein distance
- **Exact Match**: normalized-whitespace comparison
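For illustration, the Edit Similarity metric above can be sketched from scratch as `1 - levenshtein(a, b) / max(len(a), len(b))`; the benchmark's own scorer may normalize or tokenize differently:

```python
# Character-level Levenshtein distance via dynamic programming,
# keeping only the previous row of the DP table.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def edit_similarity(pred: str, ref: str) -> float:
    """1 - Levenshtein distance normalized by the longer string."""
    if not pred and not ref:
        return 1.0
    return 1.0 - levenshtein(pred, ref) / max(len(pred), len(ref))
```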

## Evaluation

```shell
python -m evaluation.utility.run_repo_patch \
    --model_path ./output/model/checkpoint-final \
    --benchmark_path melihcatal/codedp-bench-repo-patch-cpt \
    --output_dir results/repo_patch/model/variant \
    --devices auto --batch_size 4

# For instruction models, add --chat_template
```
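Base-model completions often run past the target function body. One simple post-processing heuristic (our assumption; the evaluation harness may do something different) is to truncate the generation at the first new top-level statement:

```python
# Keep generated lines until a non-indented line appears after some
# content, i.e. the model has moved on to a new top-level definition.
def truncate_body(generated: str) -> str:
    kept = []
    for line in generated.splitlines():
        if line and not line[0].isspace() and kept:
            break
        kept.append(line)
    return "\n".join(kept)
```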