---
dataset_info:
  features:
  - name: nwo
    dtype: string
  - name: sha
    dtype: string
  - name: path
    dtype: string
  - name: language
    dtype: string
  - name: identifier
    dtype: string
  - name: docstring
    dtype: string
  - name: function
    dtype: string
  - name: ast_function
    dtype: string
  - name: obf_function
    dtype: string
  - name: url
    dtype: string
  - name: function_sha
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 162123504
    num_examples: 19561
  - name: validation
    num_bytes: 18843858
    num_examples: 2445
  - name: test
    num_bytes: 19867797
    num_examples: 2446
  download_size: 42313088
  dataset_size: 200835159
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
# ArkTS Function Dataset

**Dataset Hub:** hreyulog/arkts-code-docstring
This dataset collects function-level information from ArkTS (HarmonyOS Ark TypeScript) projects, including original functions, docstrings, abstract syntax tree (AST) representations, obfuscated versions, and source code metadata. It is suitable for tasks such as code analysis, code understanding, AST research, and code search.
## Dataset Structure

The dataset contains three splits:

- `train`: Training set
- `validation`: Validation set
- `test`: Test set
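Using the per-split example counts from the metadata above, the splits work out to a conventional 80/10/10 partition; a quick check:

```python
# Split sizes taken from the dataset card metadata above.
splits = {"train": 19561, "validation": 2445, "test": 2446}
total = sum(splits.values())

# Fraction of the full dataset in each split, rounded to two decimals.
proportions = {name: round(n / total, 2) for name, n in splits.items()}
print(proportions)  # {'train': 0.8, 'validation': 0.1, 'test': 0.1}
```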
Each split is a JSON Lines (.jsonl) file, where each line is a JSON object representing a single function.
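Because each line is an independent JSON object, a split file can be read with nothing but the standard library. A minimal sketch (the field values below are illustrative placeholders, not records from the dataset):

```python
import json

# One hypothetical JSONL line; the field names follow the schema in this card,
# but the values are made up for illustration.
line = (
    '{"nwo": "example/repo", "identifier": "add", '
    '"docstring": "Adds two numbers.", "language": "ArkTS"}'
)

record = json.loads(line)
print(record["identifier"], "-", record["docstring"])
```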
### Features / Columns

| Field | Type | Description |
|---|---|---|
| `nwo` | string | Repository name |
| `sha` | string | Commit SHA |
| `path` | string | File path |
| `language` | string | Programming language |
| `identifier` | string | Function identifier / name |
| `docstring` | string | Function docstring |
| `function` | string | Original function source code |
| `ast_function` | string | AST representation of the function |
| `obf_function` | string | Obfuscated function source code |
| `url` | string | URL to the code in the repository |
| `function_sha` | string | Function-level SHA |
| `source` | string | Code source (GitHub / Gitee) |
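Since every feature is typed `string`, a record can be sanity-checked with a simple predicate. A minimal sketch (the `is_valid_record` helper and the example record are illustrative, not part of the dataset tooling):

```python
# The twelve schema fields listed above, all typed `string`.
FIELDS = [
    "nwo", "sha", "path", "language", "identifier", "docstring",
    "function", "ast_function", "obf_function", "url", "function_sha", "source",
]

def is_valid_record(record: dict) -> bool:
    """Return True if the record has every schema field as a string."""
    return all(isinstance(record.get(f), str) for f in FIELDS)

# Hypothetical record with placeholder values.
example = {f: "" for f in FIELDS}
print(is_valid_record(example))      # True
print(is_valid_record({"nwo": "x"}))  # False: most fields are missing
```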
## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("hreyulog/arkts-code-docstring")

# Inspect the first training example
print(dataset["train"][0])

# Check dataset features
print(dataset["train"].features)
```
## Fine-Tuned Model

We provide a SentenceTransformer model fine-tuned on this dataset:
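For code search, such a model embeds a natural-language query and each candidate function (or docstring), then ranks candidates by cosine similarity. A minimal sketch of the ranking step with toy vectors standing in for model outputs (the function names and numbers are purely illustrative; in practice the embeddings would come from the model's `encode` call):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy query and corpus embeddings (illustrative, not real model outputs).
query_vec = [1.0, 0.0, 1.0]
corpus = {
    "parseJson": [0.9, 0.1, 0.8],
    "renderButton": [0.0, 1.0, 0.1],
}

# Rank candidate functions by similarity to the query embedding.
ranked = sorted(corpus, key=lambda name: cosine(query_vec, corpus[name]), reverse=True)
print(ranked[0])  # parseJson
```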
## Citation / Preprint

If you use this dataset in your research, please cite the following preprint:

```bibtex
@article{2026ArkTS,
  title={ArkTS-CodeSearch: An Open-Source ArkTS Dataset for Code Retrieval},
  author={Your Name and Collaborators},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2026},
  url={https://arxiv.org/abs/XXXX.XXXXX}
}
```