# Nebula: Code Generation through Natural Language Understanding

<p align="left">
    📑 <a href="https://huggingface.co/papers/yyyy.yyyyy" target="_blank">Paper</a> &nbsp;&nbsp; | &nbsp;&nbsp; 🌐 <a href="https://nebula-codegen.github.io/" target="_blank">Project Page</a> &nbsp;&nbsp; | &nbsp;&nbsp; 💾 <a href="https://huggingface.co/collections/toolevalxm/nebula-67b978e28fd926b56a4f55a3" target="_blank">Released Resources</a> &nbsp;&nbsp; | &nbsp;&nbsp; 📦 <a href="https://github.com/xmhtoolathlon/Nebula-ModelForge" target="_blank">Repo</a>
</p>

We release the base training data for our Nebula code generation models, curated from the original CodeSearchNet corpus maintained by the GitHub research team.

Each line of `train_code_v3.jsonl` is a JSON object with the following fields:

```
{
  "code_snippet": <the original code snippet>,
  "docstring": <the natural language documentation>,
  "language": <programming language identifier>,
  "func_name": <the function name>,
  "repo": <source repository name>,
  "path": <file path in original repository>,
  "metadata": <additional context information>
}
```
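As a minimal sketch, the JSONL file can be streamed record by record and filtered by field. The field names follow the schema above; the file path and the `python` language identifier are assumptions about this particular release.

```python
import json

def load_records(path):
    """Yield one parsed record per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

def filter_by_language(records, language):
    """Keep only records whose `language` field matches, e.g. 'python'."""
    return [r for r in records if r.get("language") == language]
```

For example, `filter_by_language(load_records("train_code_v3.jsonl"), "python")` collects all Python functions in the split.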

Some entries have truncated docstrings because of the maximum-length constraint applied during preprocessing.

*Note: We filtered samples based on code-quality metrics. Future versions may include additional quality annotations.*

**License**

This dataset is released under the MIT License.