Commit ad477a1 (verified) by Aptlantis, parent 1247cc3: "Update README.md" (README.md, +82/-76).
---
license: mit
tags:
- nodejs
pretty_name: nodejs-all.json
---
# Node.js API Dataset
> **Source**: [Node.js](https://nodejs.org/) official documentation (JSON variant)
> **Processing Type**: Extractive, Hierarchical Flattening.

## Overview
This dataset contains a structured representation of the Node.js API, derived from the official `nodejs.json` distribution. Unlike raw documentation dumps, it has been processed into two distinct formats that serve different machine-learning and analytical purposes: **macro-level (Documents)** and **micro-level (Granular Items)**.

This "Dataset-as-a-Repo" approach ensures that the data is not a transient pipeline output but a versioned, maintained artifact suitable for training high-quality code models.

## Methodology & Design Choices

### 1. The "Abstract-to-Concrete" Philosophy
The core design philosophy is that "code intelligence" requires understanding both the *forest* (modules, high-level concepts) and the *trees* (individual function signatures, property types).
- **Raw Input**: `nodejs.all.json` is a massive, deeply nested structure that can overwhelm simple sequential models.
- **Transformation**: We "pulled apart" the JSON to create focused training examples.

### 2. Dual-Format Output
We intentionally avoided a "one-size-fits-all" schema.
- **Documents (`nodejs_documents.jsonl`)**: Preserves the cohesiveness of a module. Good for teaching a model "concept association" (e.g., that `fs.readFile` belongs with `fs.writeFile`).
- **Granular (`nodejs_granular.jsonl`)**: Isolates every single function and property. Good for "instruction tuning" (e.g., "Write a function signature for `http.createServer`").

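A granular row maps naturally onto a (prompt, completion) pair for instruction tuning. A minimal sketch, assuming a record shaped like the granular schema documented below; the values and the nested `metadata` layout here are illustrative, not taken from the dataset:

```python
import json

# Hypothetical granular record (field names follow the granular schema in
# this README; values and the nested metadata layout are made up).
record = {
    "id": "http.createServer",
    "parent": "http",
    "type": "method",
    "name": "createServer",
    "description": "Returns a new instance of http.Server.",
    "metadata": json.dumps({"signatures": [{"params": ["options", "requestListener"]}]}),
}

def to_instruction_pair(rec):
    """Turn one granular item into a (prompt, completion) fine-tuning pair."""
    prompt = f"Describe the Node.js API `{rec['id']}` ({rec['type']} of the `{rec['parent']}` module)."
    return prompt, rec["description"]

prompt, completion = to_instruction_pair(record)
```

The same loop over the whole file yields one training pair per API item.
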
### 3. File Formats
- **JSONL**: Chosen because line-delimited records can be streamed and inspected by eye, which suits NLP pipelines.
- **Parquet**: Chosen for the "Granular" dataset to allow fast columnar access, filtering, and analysis (e.g., "Find all methods with > 3 arguments").

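Because JSONL is line-delimited, a file can be consumed one record at a time without ever holding the whole dataset in memory. A minimal streaming sketch (a temporary file stands in for `output/nodejs_granular.jsonl`):

```python
import json
import os
import tempfile

# Write two toy rows so the sketch is self-contained; in practice you
# would point `path` at output/nodejs_granular.jsonl instead.
rows = [{"id": "fs.readFile", "parent": "fs"}, {"id": "fs.writeFile", "parent": "fs"}]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
    path = f.name

def stream_jsonl(path):
    """Yield one parsed record per line; only one line is in memory at a time."""
    with open(path) as fh:
        for line in fh:
            if line.strip():
                yield json.loads(line)

ids = [rec["id"] for rec in stream_jsonl(path)]
os.unlink(path)
```
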
## Dataset Structure

### Output Location
All processed files are located in `output/`:
```text
output/
├── nodejs_documents.jsonl    # High-level module data
├── nodejs_granular.jsonl     # Individual API items
└── nodejs_granular.parquet   # Parquet version of granular data
```

### Schema: Documents (`nodejs_documents.jsonl`)
Each row represents a whole module (e.g., `Buffer`, `http`).

| Field | Type | Description |
|-------|------|-------------|
| `module_name` | string | Name of the module (e.g., `fs`). |
| `type` | string | Usually `module` or `global`. |
| `description` | string | Raw HTML/Markdown description of the module. |
| `content` | json-string | Full nested JSON blob of the module's contents (methods, props). |

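Note that `content` is a JSON *string*, so it needs a second decode after the row itself is parsed. A sketch with a hypothetical row (the nested layout inside `content` is an assumption for illustration, not the verified blob structure):

```python
import json

# Hypothetical Documents row; `content` holds a JSON string, not an object.
doc = {
    "module_name": "fs",
    "type": "module",
    "description": "<p>File system utilities.</p>",
    "content": json.dumps({"methods": [{"name": "readFile"}, {"name": "writeFile"}]}),
}

# Second decode step: turn the json-string field into a usable structure.
content = json.loads(doc["content"])
method_names = [m["name"] for m in content["methods"]]
```
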
### Schema: Granular (`nodejs_granular.jsonl`)
Each row represents a single API item (function, property, event).

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique namespaced ID (e.g., `fs.readFile`). |
| `parent` | string | Parent module (e.g., `fs`). |
| `type` | string | `method`, `property`, `event`, etc. |
| `name` | string | Short name (e.g., `readFile`). |
| `description` | string | Description of *just* this item. |
| `metadata` | json-string | Detailed signatures, params, stability indices. |

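As with `content` above, `metadata` is a json-string and must be decoded before the signature details are usable. A sketch with a hypothetical row (the nested signature layout is an assumption for illustration):

```python
import json

# Hypothetical granular row; only the field names come from the schema above.
item = {
    "id": "fs.readFile",
    "parent": "fs",
    "type": "method",
    "name": "readFile",
    "description": "Asynchronously reads the entire contents of a file.",
    "metadata": json.dumps({
        "signatures": [{"params": [{"name": "path"}, {"name": "options"}, {"name": "callback"}]}]
    }),
}

# Decode the json-string, then pull the parameter names of the first signature.
meta = json.loads(item["metadata"])
param_names = [p["name"] for p in meta["signatures"][0]["params"]]
```
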
## Use Cases

### 1. Pre-Training Code Models
Feed `nodejs_documents.jsonl` into a language model to teach it the general structure and API surface of Node.js. The large context windows of modern LLMs can easily ingest entire modules.

### 2. Instruction Tuning / RAG
Use `nodejs_granular.jsonl` to build a Retrieval-Augmented Generation (RAG) system.
- **Query**: "How do I read a file in Node?"
- **Retrieval**: Search against the `description` field in the granular dataset.
- **Context**: Retrieve the exact `metadata` (signature) for `fs.readFile`.

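The retrieval step above can be sketched with simple keyword overlap over the `description` field (hypothetical records; a production system would typically use BM25 or embeddings instead):

```python
# Two hypothetical granular records to search over.
records = [
    {"id": "fs.readFile", "description": "Asynchronously reads the entire contents of a file."},
    {"id": "http.createServer", "description": "Returns a new instance of http.Server."},
]

def retrieve(query, records):
    """Rank records by how many query words appear in their description."""
    q = set(query.lower().split())
    def score(rec):
        words = set(rec["description"].lower().replace(".", " ").split())
        return len(q & words)
    return max(records, key=score)

best = retrieve("read a file", records)
```

The winning record's `metadata` field is then what gets pasted into the model's context.
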
### 3. API Analysis
Use `nodejs_granular.parquet` with Pandas/DuckDB to answer meta-questions:
- *Which Node.js APIs are marked as Experimental?*
- *What is the average number of arguments for `fs` methods vs. `http` methods?*

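A sketch of the second meta-question with Pandas. In practice you would load the real file with `pd.read_parquet("output/nodejs_granular.parquet")`; here an inline frame of hypothetical rows stands in for it, and `n_params` is an assumed derived column (in the real dataset it would be computed from `metadata`), not a documented field:

```python
import pandas as pd

# Hypothetical rows standing in for the Parquet file.
df = pd.DataFrame([
    {"id": "fs.readFile",  "parent": "fs",   "type": "method", "n_params": 3},
    {"id": "fs.writeFile", "parent": "fs",   "type": "method", "n_params": 4},
    {"id": "http.get",     "parent": "http", "type": "method", "n_params": 2},
])

# Average parameter count per module, methods only.
avg_params = df[df["type"] == "method"].groupby("parent")["n_params"].mean()
```
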
## Provenance
- **Original File**: `datasets/raw/nodejs.all.json`
- **Script**: `src/process_nodejs.py`
- **Maintainer**: Antigravity (Agent) / User