---
license: mit
language:
- en
tags:
- code
---
# MultiLang Code Parser Dataset (MLCPD)
## Dataset Description
The MultiLang Code Parser Dataset (MLCPD) is a comprehensive multi-language code dataset designed to benchmark language-agnostic AI code parsers. It currently offers a filtered version of the StarCoder dataset, parsed with language-specific tree-sitter parsers, with future plans to unify outputs into a standard JSON format for complete AST representation.
### Key Features
- **Cleaned and Filtered Code**: Samples have been processed to remove outliers in terms of line length and code size
- **Quality Metrics**: Each sample includes metadata: average line length, line count, AST node count, and parse error count
- **Multi-language Support**: 10 programming languages represented in separate subsets
- **Consistent Format**: All samples follow the same Parquet structure for easy processing
### Dataset Size
The complete dataset is approximately 35GB in size. Individual language files vary in size, with the largest being C++ (5.85GB) and the smallest being Ruby (1.71GB).
### Dataset Statistics
| Language | Sample Count | Avg. Line Length | Avg. Line Count |
|------------|--------------|------------------|-----------------|
| C | 700,821 | 28.08 | 61.76 |
| C++ | 707,641 | 28.16 | 87.88 |
| C# | 705,203 | 29.53 | 44.26 |
| Go | 700,331 | 25.18 | 68.22 |
| Java | 711,922 | 30.85 | 54.40 |
| JavaScript | 687,775 | 27.69 | 44.15 |
| Python | 706,126 | 32.67 | 54.70 |
| Ruby | 703,473 | 27.35 | 27.41 |
| Scala | 702,833 | 35.30 | 44.38 |
| TypeScript | 695,597 | 29.18 | 36.89 |
## Dataset Structure
The dataset is organized with separate Parquet files for each programming language:
- `c_parsed_1.parquet` ... `c_parsed_4.parquet` - C language samples
- `cpp_parsed_1.parquet` ... `cpp_parsed_4.parquet` - C++ language samples
- `c_sharp_parsed_1.parquet` ... `c_sharp_parsed_4.parquet` - C# language samples
- `go_parsed_1.parquet` ... `go_parsed_4.parquet` - Go language samples
- `java_parsed_1.parquet` ... `java_parsed_4.parquet` - Java language samples
- `javascript_parsed_1.parquet` ... `javascript_parsed_4.parquet` - JavaScript language samples
- `python_parsed_1.parquet` ... `python_parsed_4.parquet` - Python language samples
- `ruby_parsed_1.parquet` ... `ruby_parsed_4.parquet` - Ruby language samples
- `scala_parsed_1.parquet` ... `scala_parsed_4.parquet` - Scala language samples
- `typescript_parsed_1.parquet` ... `typescript_parsed_4.parquet` - TypeScript language samples
Within each file, data is stored with the following schema:
```
- language: string (the programming language of the code sample)
- code: string (the complete code content)
- avg_line_length: float (average character count per line)
- line_count: integer (total number of lines in the code)
- lang_specific_parse: string (tree-sitter parsed output of the code sample)
- ast_node_count: integer (total number of nodes in the AST)
- num_errors: integer (total number of parse errors reported for the code)
```
Each sample is stored as a row in the Parquet file with these seven columns.
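The line-based metadata fields in the schema can be recomputed directly from the `code` column. A minimal sketch (the dataset authors' exact computation is an assumption; this shows one straightforward reading of "average character count per line"):

```python
def compute_metrics(code: str) -> dict:
    """Recompute the line-based metadata fields for one sample."""
    lines = code.splitlines()
    line_count = len(lines)
    avg_line_length = (
        sum(len(line) for line in lines) / line_count if line_count else 0.0
    )
    return {"line_count": line_count, "avg_line_length": avg_line_length}

sample = "def add(a, b):\n    return a + b\n"
print(compute_metrics(sample))
```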
## How to Access the Dataset
### Using the Hugging Face `datasets` Library
This dataset is hosted on the Hugging Face Hub and can be easily accessed using the `datasets` library.
#### Install the Required Library
```bash
pip install datasets
```
#### Import Library
```python
from datasets import load_dataset
```
#### Load the Entire Dataset
```python
dataset = load_dataset(
    "jugalgajjar/MultiLang-Code-Parser-Dataset"
)
```
#### Load a Specific Language
```python
dataset = load_dataset(
    "jugalgajjar/MultiLang-Code-Parser-Dataset",
    data_files="python_parsed_1.parquet"
)
```
#### Stream Data
```python
dataset = load_dataset(
    "jugalgajjar/MultiLang-Code-Parser-Dataset",
    data_files="python_parsed_1.parquet",
    streaming=True
)
```
#### Access Data Content (After Downloading)
```python
try:
    for example in dataset["train"].take(5):
        print(example)
        print("-" * 25)
except Exception as e:
    print(f"An error occurred: {e}")
```
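The metadata columns also make filtering straightforward, e.g. keeping only samples that parsed without errors. A sketch on hand-written stand-in rows so it runs standalone (real rows come from `load_dataset`; the values below are illustrative only):

```python
# Stand-in rows following the dataset schema (illustrative values).
rows = [
    {"language": "python", "code": "print('ok')", "num_errors": 0},
    {"language": "python", "code": "def broken(:", "num_errors": 1},
]

# Keep only samples whose parse reported no errors.
clean = [row for row in rows if row["num_errors"] == 0]
print(len(clean))
```

With the `datasets` library, the same predicate can be applied via `dataset["train"].filter(lambda row: row["num_errors"] == 0)`.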
### Manual Download
You can also manually download specific language files from the Hugging Face repository page:
1. Visit `https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset`
2. Navigate to the "Files" tab
3. Click on the language file you want to download (e.g., `python_parsed_1.parquet`)
4. Use the download button to save the file locally
## Dataset Creation
This dataset was created through the following process:
1. Original code samples were collected from the StarCoder dataset ([URL](https://huggingface.co/datasets/bigcode/starcoderdata))
2. Statistical analysis was performed to identify quality metrics
3. Outliers were removed using the interquartile range (IQR) method
4. Samples were filtered to remove excessively long or short code examples
5. Data was normalized and standardized across languages
6. Metadata (average line length and line count) was calculated for each sample
7. Data was serialized in the efficient Parquet format for optimal storage and access speed
8. Code samples from each language were parsed using language-specific tree-sitter parsers
9. Metadata (AST node count and number of parse errors) was recorded for each sample
10. Final data was split into four files and stored in the Parquet format
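Step 3's outlier removal can be sketched with the standard IQR rule. This is a reconstruction, not the authors' exact code; the conventional 1.5× multiplier is an assumption here:

```python
import statistics

def iqr_bounds(values):
    """Return the (lower, upper) fences of the 1.5*IQR outlier rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Illustrative line counts; 900 is an obvious outlier.
line_counts = [50, 55, 60, 62, 58, 61, 57, 900]
low, high = iqr_bounds(line_counts)
kept = [v for v in line_counts if low <= v <= high]
print(kept)
```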
## Citation
If you use this dataset in your research or project, please cite it as follows:
```bibtex
@misc{mlcpd2025,
  author = {Jugal Gajjar and Kamalasankari Subramaniakuppusamy and Kaustik Ranaware},
  title = {MultiLang Code Parser Dataset (MLCPD)},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/datasets/jugalgajjar/MultiLang-Code-Parser-Dataset}}
}
```
## License
This dataset is released under the MIT License. See the LICENSE file for more details. |