---
license: mit
tags:
- nodejs
pretty_name: nodejs-all.json
---

# Node.js API Dataset

> **Source**: [Node.js](https://nodejs.org/) official documentation (JSON variant)
> **Processing Type**: Extractive, Hierarchical Flattening.

## Overview

This dataset contains a structured representation of the Node.js API, derived from the official `nodejs.json` distribution. Unlike raw documentation dumps, this dataset has been processed into two distinct formats to serve different machine learning and analytical purposes: **Macro-level (Documents)** and **Micro-level (Granular Items)**.

This "Dataset-as-a-Repo" approach ensures that the data is not just a transient output of a pipeline but a versioned, maintained artifact suitable for training high-quality code models.

## Methodology & Design Choices

### 1. The "Abstract-to-Concrete" Philosophy

The core design philosophy is that "code intelligence" requires understanding both the *forest* (modules, high-level concepts) and the *trees* (individual function signatures, property types).

- **Raw Input**: The `nodejs.all.json` is a massive, nested structure that can be overwhelming for simple sequential models.
- **Transformation**: We "pulled apart" the JSON to create focused training examples.
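The "pulling apart" step can be sketched roughly as follows. Note that the nested field names used here (`methods`, `properties`, `events`, `name`) are illustrative assumptions, not the actual layout of `nodejs.all.json`:

```python
# Sketch of hierarchical flattening: walk a nested module node and emit
# one flat record per API item. Field names are hypothetical.
def flatten(module_name, node, out):
    for kind in ("methods", "properties", "events"):
        for child in node.get(kind, []):
            out.append({
                "id": f"{module_name}.{child['name']}",
                "parent": module_name,
                "type": "property" if kind == "properties" else kind[:-1],
                "name": child["name"],
            })
            flatten(module_name, child, out)  # recurse into nested items

items = []
flatten("fs", {"methods": [{"name": "readFile"}, {"name": "writeFile"}]}, items)
print([i["id"] for i in items])  # ['fs.readFile', 'fs.writeFile']
```

Each emitted record is self-contained, which is what makes the granular format usable as isolated training examples.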
### 2. Dual-Format Output

We intentionally avoided a "one-size-fits-all" schema.

- **Documents (`nodejs_documents.jsonl`)**: Preserves the cohesion of a module. Good for teaching a model "concept association" (e.g., that `fs.readFile` belongs with `fs.writeFile`).
- **Granular (`nodejs_granular.jsonl`)**: Isolates every single function and property. Good for "instruction tuning" (e.g., "Write a function signature for `http.createServer`").

### 3. File Formats

- **JSONL**: Chosen for its streaming capabilities and human readability. Perfect for NLP pipelines.
- **Parquet**: Chosen for the "Granular" dataset to allow fast columnar access, filtering, and analysis (e.g., "Find all methods with more than 3 arguments").
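The streaming property of JSONL is what makes it practical for large dumps: records can be consumed one line at a time without loading the whole file. A minimal sketch (the two sample records are invented):

```python
import io
import json

# Stand-in for open("output/nodejs_granular.jsonl") — each line is one record.
sample = io.StringIO(
    '{"id": "fs.readFile", "parent": "fs"}\n'
    '{"id": "http.createServer", "parent": "http"}\n'
)

# Parse lazily, one record per line; memory stays flat regardless of file size.
records = [json.loads(line) for line in sample]
print(len(records), records[0]["id"])  # 2 fs.readFile
```

From there, a columnar Parquet copy can be produced with e.g. `pandas.DataFrame(records).to_parquet(...)` for the analytical workloads described below.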
## Dataset Structure

### Output Location

All processed files are located in `output/`:

```text
output/
├── nodejs_documents.jsonl    # High-level module data
├── nodejs_granular.jsonl     # Individual API items
└── nodejs_granular.parquet   # Parquet version of granular data
```
### Schema: Documents (`nodejs_documents.jsonl`)

Each record represents a whole module (e.g., `Buffer`, `http`).

| Field | Type | Description |
|-------|------|-------------|
| `module_name` | string | Name of the module (e.g., `fs`). |
| `type` | string | Usually `module` or `global`. |
| `description` | string | Raw HTML/Markdown description of the module. |
| `content` | json-string | Full nested JSON blob of the module's contents (methods, properties). |
### Schema: Granular (`nodejs_granular.jsonl`)

Each record represents a single API item (function, property, or event).

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique namespaced ID (e.g., `fs.readFile`). |
| `parent` | string | Parent module (e.g., `fs`). |
| `type` | string | `method`, `property`, `event`, etc. |
| `name` | string | Short name (e.g., `readFile`). |
| `description` | string | Description of *just* this item. |
| `metadata` | json-string | Detailed signatures, parameters, stability indices. |
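Because `metadata` is typed as a json-string, it needs a second decode after the record itself is parsed. A sketch with a hypothetical record (the field contents are invented):

```python
import json

# One granular JSONL line; `metadata` holds a JSON-encoded string.
line = ('{"id": "fs.readFile", "parent": "fs", "type": "method", '
        '"name": "readFile", "description": "Asynchronously reads a file.", '
        '"metadata": "{\\"stability\\": 2}"}')

record = json.loads(line)                    # first decode: the record
metadata = json.loads(record["metadata"])    # second decode: the nested blob
print(record["id"], metadata["stability"])   # fs.readFile 2
```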
## Use Cases

### 1. Pre-Training Code Models

Feed `nodejs_documents.jsonl` into a language model to teach it the general structure and API surface of Node.js. The large context windows of modern LLMs can easily ingest entire modules.
### 2. Instruction Tuning / RAG

Use `nodejs_granular.jsonl` to build a Retrieval-Augmented Generation (RAG) system.

- **Query**: "How do I read a file in Node?"
- **Retrieval**: Search against the `description` field in the granular dataset.
- **Context**: Retrieve the exact `metadata` (signature) for `fs.readFile`.
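The retrieval step above can be sketched with a naive keyword-overlap scorer; a real system would use embeddings, and the two sample records are hypothetical:

```python
import re

# Toy corpus in the granular layout (id + description only).
records = [
    {"id": "fs.readFile",
     "description": "Asynchronously reads the entire contents of a file."},
    {"id": "http.createServer",
     "description": "Returns a new instance of http.Server."},
]

def tokens(text):
    # Lowercased word set, punctuation stripped.
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query, records):
    # Rank records by word overlap with the query; return the best match.
    q = tokens(query)
    return max(records, key=lambda r: len(q & tokens(r["description"])))

hit = retrieve("read a file", records)
print(hit["id"])  # fs.readFile
```

The retrieved record's `metadata` field would then be injected into the model's context.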
### 3. API Analysis

Use `nodejs_granular.parquet` with Pandas/DuckDB to answer meta-questions:

- *Which Node.js APIs are marked as Experimental?*
- *What is the average number of arguments for `fs` methods vs `http` methods?*
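A minimal Pandas sketch of this kind of query; in practice the frame would come from `pd.read_parquet("output/nodejs_granular.parquet")`, and the rows here are invented:

```python
import pandas as pd

# Hypothetical rows mirroring the granular schema.
df = pd.DataFrame([
    {"id": "fs.readFile",       "parent": "fs",   "type": "method"},
    {"id": "fs.writeFile",      "parent": "fs",   "type": "method"},
    {"id": "http.createServer", "parent": "http", "type": "method"},
])

# Count methods per parent module — the columnar layout makes such
# filter/group queries cheap on the Parquet file.
counts = df[df["type"] == "method"].groupby("parent").size()
print(counts.to_dict())  # {'fs': 2, 'http': 1}
```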
## Provenance

- **Original File**: `datasets/raw/nodejs.all.json`
- **Script**: `src/process_nodejs.py`
- **Maintainer**: Antigravity (Agent) / User