Upload 5 files

- .gitattributes +2 -0
- README.md +76 -0
- STATS.md +13 -0
- nodejs_documents.jsonl +3 -0
- nodejs_granular.jsonl +3 -0
- nodejs_granular.parquet +3 -0
.gitattributes
CHANGED

@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+nodejs_documents.jsonl filter=lfs diff=lfs merge=lfs -text
+nodejs_granular.jsonl filter=lfs diff=lfs merge=lfs -text
README.md
ADDED

@@ -0,0 +1,76 @@
# Node.js API Dataset

> **Source**: [Node.js](https://nodejs.org/) official documentation (JSON variant)
> **Processing Type**: Extractive, hierarchical flattening.

## Overview

This dataset contains a structured representation of the Node.js API, derived from the official `nodejs.json` distribution. Unlike raw documentation dumps, it has been processed into two distinct formats that serve different machine learning and analytical purposes: **macro-level (documents)** and **micro-level (granular items)**.

This "dataset-as-a-repo" approach ensures the data is not a transient pipeline output but a versioned, maintained artifact suitable for training high-quality code models.

## Methodology & Design Choices

### 1. The "Abstract-to-Concrete" Philosophy

The core design philosophy is that code intelligence requires understanding both the *forest* (modules, high-level concepts) and the *trees* (individual function signatures, property types).

- **Raw input**: `nodejs.all.json` is a massive, deeply nested structure that can overwhelm simple sequential models.
- **Transformation**: We pulled the JSON apart into focused training examples.

### 2. Dual-Format Output

We intentionally avoided a one-size-fits-all schema.

- **Documents (`nodejs_documents.jsonl`)**: Preserves the cohesion of a module. Good for teaching a model concept association (e.g., that `fs.readFile` belongs with `fs.writeFile`).
- **Granular (`nodejs_granular.jsonl`)**: Isolates every single function and property. Good for instruction tuning (e.g., "Write a function signature for `http.createServer`").

### 3. File Formats

- **JSONL**: Chosen for its streamability and human readability; well suited to NLP pipelines.
- **Parquet**: Chosen for the granular dataset to allow fast columnar access, filtering, and analysis (e.g., "find all methods with more than 3 arguments").
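The streaming property of JSONL can be sketched concretely. The sample below is a hypothetical two-line file mirroring the documents schema, not the real ~700 MB `output/nodejs_documents.jsonl`:

```python
import json
import os
import tempfile

# Hypothetical two-record sample mirroring the documents schema.
sample = [
    {"module_name": "fs", "type": "module", "description": "", "content": "{}"},
    {"module_name": "http", "type": "module", "description": "", "content": "{}"},
]

path = os.path.join(tempfile.mkdtemp(), "docs.jsonl")
with open(path, "w", encoding="utf-8") as f:
    for rec in sample:
        f.write(json.dumps(rec) + "\n")

def iter_jsonl(p):
    """Yield one record at a time; never loads the whole file into memory."""
    with open(p, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

names = [doc["module_name"] for doc in iter_jsonl(path)]
print(names)  # ['fs', 'http']
```

The same iterator works unchanged on the full file, since each line is an independent JSON record.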

## Dataset Structure

### Output Location

All processed files are located in `output/`:

```text
output/
├── nodejs_documents.jsonl   # High-level module data
├── nodejs_granular.jsonl    # Individual API items
└── nodejs_granular.parquet  # Parquet version of granular data
```

### Schema: Documents (`nodejs_documents.jsonl`)

Each line represents a whole module (e.g., `Buffer`, `http`).

| Field | Type | Description |
|-------|------|-------------|
| `module_name` | string | Name of the module (e.g., `fs`). |
| `type` | string | Usually `module` or `global`. |
| `description` | string | Raw HTML/Markdown description of the module. |
| `content` | json-string | Full nested JSON blob of the module's contents (methods, props). |
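One subtlety worth noting: `content` is stored as a JSON *string*, so reading a document takes two parses. A minimal sketch with a hypothetical record:

```python
import json

# Hypothetical JSONL line following the documents schema above.
line = ('{"module_name": "fs", "type": "module", '
        '"description": "<p>File system</p>", '
        '"content": "{\\"methods\\": [{\\"name\\": \\"readFile\\"}]}"}')

doc = json.loads(line)                  # first parse: the JSONL record
contents = json.loads(doc["content"])   # second parse: the nested blob
print(contents["methods"][0]["name"])   # readFile
```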

### Schema: Granular (`nodejs_granular.jsonl`)

Each line represents a single API item (function, property, event).

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique namespaced ID (e.g., `fs.readFile`). |
| `parent` | string | Parent module (e.g., `fs`). |
| `type` | string | `method`, `property`, `event`, etc. |
| `name` | string | Short name (e.g., `readFile`). |
| `description` | string | Description of *just* this item. |
| `metadata` | json-string | Detailed signatures, params, stability indices. |
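As a sketch of working with one granular record (the record below is hypothetical, following the schema above): the `id` is namespaced as `<parent>.<name>`, and `metadata`, like `content` in the documents schema, needs its own `json.loads`:

```python
import json

# Hypothetical granular record following the schema above.
item = {
    "id": "fs.readFile",
    "parent": "fs",
    "type": "method",
    "name": "readFile",
    "description": "Asynchronously reads the entire contents of a file.",
    "metadata": "{\"signatures\": []}",
}

# The namespaced id decomposes into parent module and short name.
parent, _, name = item["id"].partition(".")
assert (parent, name) == (item["parent"], item["name"])

# metadata is a nested JSON string, so it needs a second parse.
meta = json.loads(item["metadata"])
print(parent, name, meta)  # fs readFile {'signatures': []}
```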

## Use Cases

### 1. Pre-Training Code Models

Feed `nodejs_documents.jsonl` into a language model to teach it the general structure and API surface of Node.js. The large context windows of modern LLMs can easily ingest entire modules.

### 2. Instruction Tuning / RAG

Use `nodejs_granular.jsonl` to build a Retrieval-Augmented Generation (RAG) system:

- **Query**: "How do I read a file in Node?"
- **Retrieval**: Search against the `description` field in the granular dataset.
- **Context**: Retrieve the exact `metadata` (signature) for `fs.readFile`.
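The three steps above can be sketched with naive lexical retrieval (word overlap). The records here are hypothetical, and a real system would use embeddings or BM25 over the full granular file:

```python
import re

# Two hypothetical granular records (see the schema above).
items = [
    {"id": "fs.readFile",
     "description": "Asynchronously reads the entire contents of a file."},
    {"id": "http.createServer",
     "description": "Returns a new instance of http.Server."},
]

def tokens(s):
    # Lowercased word tokens, punctuation stripped.
    return set(re.findall(r"[a-z]+", s.lower()))

def retrieve(query, records):
    # Rank by naive word overlap between query and description.
    q = tokens(query)
    return max(records, key=lambda r: len(q & tokens(r["description"])))

hit = retrieve("How do I read a file in Node?", items)
print(hit["id"])  # fs.readFile
```

Once the best record is found, its `metadata` field supplies the exact signature to place in the model's context.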

### 3. API Analysis

Use `nodejs_granular.parquet` with Pandas/DuckDB to answer meta-questions:

- *Which Node.js APIs are marked as Experimental?*
- *What is the average number of arguments for `fs` methods vs. `http` methods?*
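A pure-stdlib sketch of the second question, over hypothetical rows with an assumed derived column `n_args` (in practice you would extract argument counts from each item's `metadata` and run the aggregation as a pandas or DuckDB query against the parquet file):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical granular rows; `n_args` is an assumed derived column,
# not part of the schema above.
rows = [
    {"parent": "fs", "type": "method", "name": "readFile", "n_args": 3},
    {"parent": "fs", "type": "method", "name": "writeFile", "n_args": 4},
    {"parent": "http", "type": "method", "name": "get", "n_args": 2},
]

# Group argument counts by parent module, methods only.
by_parent = defaultdict(list)
for r in rows:
    if r["type"] == "method":
        by_parent[r["parent"]].append(r["n_args"])

avg_args = {p: mean(v) for p, v in by_parent.items()}
print(avg_args)
```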

## Provenance

- **Original File**: `datasets/raw/nodejs.all.json`
- **Script**: `src/process_nodejs.py`
- **Maintainer**: Antigravity (Agent) / User
STATS.md
ADDED

@@ -0,0 +1,13 @@
# Node.js Dataset Statistics

**Generated:** 2026-02-02 21:49:40.129088
**Source File:** d:/Projects/Datasets/raw/nodejs.all.json

## Statistics

- **Total Modules/Documents**: 26883
- **Total Granular Items (Functions/Props/Events)**: 570380

## Outputs

- `output/nodejs_documents.jsonl`: One line per module (high-level context).
- `output/nodejs_granular.jsonl`: One line per API item (fine-tuning ready).
- `output/nodejs_granular.parquet`: Efficient columnar storage of granular data.
nodejs_documents.jsonl
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f62d31e8ed107770a083687b78e29539bac0318996e03e5027ebaf9924ca2ba
+size 723690415
nodejs_granular.jsonl
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:244282bbfe4a163c1ff2cb2ba4ba59e07b845bfc57396a500b18f9bcc2516a5c
+size 694821483
nodejs_granular.parquet
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f953b45496d4d61b1a877fb68cd6c5b8ffa8caccc9a5a281fd881165de1ad951
+size 153811230