---
license: cc-by-4.0
task_categories:
  - text-retrieval
tags:
  - code
  - nasa
  - science
  - retrieval-benchmark
configs:
  - config_name: programming_language
    data_files:
      - split: test
        path: qrels/programming_language/*.tsv
  - config_name: query_type
    data_files:
      - split: test
        path: qrels/query_type/*.tsv
  - config_name: division
    data_files:
      - split: test
        path: qrels/division/*.tsv
---

# NASA Code Retrieval Benchmark v0.1.1

This repository hosts an updated version of the NASA Code Retrieval Benchmark: a code retrieval benchmark built from code in seven programming languages sourced from NASA's GitHub repositories.

## What's New in v0.1.1?

v0.1.1 introduces a hierarchical structure and official Hugging Face dataset configurations. This lets you evaluate models by programming language, query type, or NASA science division without duplicating data in the file system.

## Licensing and Intellectual Property

This dataset is released under CC-BY-4.0 and contains only structured metadata and annotations produced by the dataset authors. It does not redistribute original source code from the indexed repositories.

The `corpus.jsonl` file contains placeholders for original code content rather than the source code itself. This is by design, to respect the intellectual property and licensing terms of individual repository owners.

Users who wish to populate the corpus for research purposes may do so by fetching content directly from the source repositories using the replication scripts provided in the companion repository:

👉 NASA-IMPACT/github-code-discovery — see scripts/code_snippet/

Please ensure you comply with the licensing terms of each individual repository when using the fetched content.
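The population step amounts to streaming `corpus.jsonl`, fetching each snippet, and writing a filled copy. Below is a minimal sketch of that loop, not the official replication script; the `fetch_from_github` helper assumes a hypothetical `url` field on each record, so check the actual corpus schema and the companion repository's scripts before relying on it:

```python
import json
import urllib.request


def populate_corpus(in_path, out_path, fetch):
    """Stream a corpus JSONL file, replacing each placeholder `text`
    with real code.

    `fetch` maps one corpus record to its source code string; deriving
    the download location from a record depends on the corpus metadata,
    so it is injected by the caller.
    """
    with open(in_path) as src, open(out_path, "w") as dst:
        for line in src:
            rec = json.loads(line)
            rec["text"] = fetch(rec)  # overwrite the placeholder
            dst.write(json.dumps(rec) + "\n")


def fetch_from_github(rec):
    # Hypothetical helper: assumes each record carries a raw file URL
    # (e.g. raw.githubusercontent.com); the real schema may differ.
    with urllib.request.urlopen(rec["url"]) as resp:
        return resp.read().decode("utf-8")
```

Any fetcher with the same one-record-in, one-string-out shape can be passed in, which keeps the streaming logic independent of how (and from where) the code is downloaded.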

## Dataset Structure

The ground-truth relationships (qrels) are organized into two primary configurations:

```
nasa-science-code-benchmark-v0.1.1/
├── corpus.jsonl
├── queries.jsonl
└── qrels/
    ├── division/                 # Evaluation by NASA Science Division
    │   ├── astrophysics_division.tsv
    │   ├── biological_and_physical_sciences_division.tsv
    │   ├── earth_science_division.tsv
    │   ├── heliophysics_division.tsv
    │   ├── planetary_science_division.tsv
    │   └── not_a_nasa_division.tsv
    ├── programming_language/     # Evaluation by language
    │   ├── c++.tsv
    │   ├── c.tsv
    │   ├── fortran.tsv
    │   ├── java.tsv
    │   ├── javascript.tsv
    │   ├── matlab.tsv
    │   └── python.tsv
    └── query_type/               # Evaluation by query intent
        ├── nasa_science_class_code_docstring_heldout.tsv
        ├── nasa_science_class_code_identifier_heldout.tsv
        ├── nasa_science_function_code_docstring_heldout.tsv
        └── nasa_science_function_code_identifier_heldout.tsv
```

## Data Fields

- `corpus.jsonl`: A collection of all unique code snippets (functions and classes) from all languages.
  - `_id`: A unique string identifier for the code snippet.
  - `text`: ⚠️ Placeholder; use the replication scripts to populate it with the original source code.
- `queries.jsonl`: A collection of all unique queries (docstrings and identifiers).
  - `_id`: A unique string identifier for the query.
  - `text`: The natural language query.
- `qrels/`: Tab-separated values (TSV) files mapping queries to relevant code snippets. Format: `query-id<TAB>corpus-id<TAB>score`.
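All three files can also be read directly, without the `datasets` library. A minimal sketch, assuming one JSON record per line in the JSONL files and a possible `query-id  corpus-id  score` header row in the TSVs (skipped if present):

```python
import csv
import json


def load_jsonl(path):
    # One JSON record per line, each with at least `_id` and `text`.
    with open(path) as f:
        return {rec["_id"]: rec["text"] for rec in map(json.loads, f)}


def load_qrels(path):
    # Rows are query-id <TAB> corpus-id <TAB> score.
    qrels = {}
    with open(path) as f:
        for row in csv.reader(f, delimiter="\t"):
            if row[0] == "query-id":  # skip a header row if present
                continue
            qid, cid, score = row
            qrels.setdefault(qid, {})[cid] = int(score)
    return qrels
```

The nested `{query-id: {corpus-id: score}}` layout mirrors what common IR evaluation tooling expects for qrels.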

## How to Use

### 1. Load by Programming Language

Use this to evaluate model performance on specific languages.

```python
from datasets import load_dataset

ds = load_dataset(
    "nasa-impact/nasa-science-code-benchmark-v0.1.1",
    name="programming_language",
)
```

### 2. Load by Query Type

Use this to evaluate performance based on the nature of the query (e.g., `nasa_science_class_code_docstring_heldout`, `nasa_science_class_code_identifier_heldout`).

```python
from datasets import load_dataset

ds = load_dataset(
    "nasa-impact/nasa-science-code-benchmark-v0.1.1",
    name="query_type",
)
```

### 3. Load by Division

Use this to evaluate performance based on the NASA science division (e.g., `astrophysics_division`, `biological_and_physical_sciences_division`).

```python
from datasets import load_dataset

ds = load_dataset(
    "nasa-impact/nasa-science-code-benchmark-v0.1.1",
    name="division",
)
```
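Once a retriever has produced a ranking for each query, the qrels can score it with standard IR metrics. A minimal, retriever-agnostic Recall@k sketch (the nested-dict shapes below are an assumed convention, matching the `query-id`/`corpus-id`/`score` qrels format):

```python
def recall_at_k(qrels, results, k=10):
    """Mean Recall@k over all queries that have qrels.

    qrels:   {query_id: {corpus_id: relevance_score}}
    results: {query_id: [corpus_id, ...]}  ranked best-first by the retriever
    """
    scores = []
    for qid, relevant in qrels.items():
        top_k = set(results.get(qid, [])[:k])
        hits = sum(1 for cid in relevant if cid in top_k)
        scores.append(hits / len(relevant))
    return sum(scores) / len(scores)
```

Because the qrels are split by language, query type, and division, the same function can be run per TSV file to get a per-category breakdown.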

## Evaluation Categories

### Programming Languages

| File | Language |
| --- | --- |
| `python.tsv` | Python |
| `c.tsv` | C |
| `c++.tsv` | C++ |
| `java.tsv` | Java |
| `javascript.tsv` | JavaScript |
| `fortran.tsv` | Fortran |
| `matlab.tsv` | MATLAB |

### Query Types

| File | Description |
| --- | --- |
| `nasa_science_function_code_docstring_heldout.tsv` | Query is a function's documentation/comment. |
| `nasa_science_function_code_identifier_heldout.tsv` | Query is the specific function name. |
| `nasa_science_class_code_docstring_heldout.tsv` | Query is a class's documentation/comment. |
| `nasa_science_class_code_identifier_heldout.tsv` | Query is the specific class name. |