---
license: mit
tags:
- nodejs
pretty_name: nodejs-all.json
---

# Node.js API Dataset

**Source:** Node.js official documentation (JSON variant)
**Processing Type:** Extractive, Hierarchical Flattening
## Overview
This dataset contains a structured representation of the Node.js API, derived from the official `nodejs.all.json` distribution. Unlike raw documentation dumps, this dataset has been processed into two distinct formats to serve different machine learning and analytical purposes: Macro-level (Documents) and Micro-level (Granular Items).
This "Dataset-as-a-Repo" approach ensures that the data is not just a transient output of a pipeline but a versioned, maintained artifact suitable for training high-quality code models.
## Methodology & Design Choices
### 1. The "Abstract-to-Concrete" Philosophy
The core design philosophy here is that "code intelligence" requires understanding both the forest (modules, high-level concepts) and the trees (individual function signatures, property types).
- **Raw Input:** The `nodejs.all.json` file is a massive, nested structure that can be overwhelming for simple sequential models.
- **Transformation:** We "pulled apart" the JSON to create focused training examples.
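The "pulling apart" step can be sketched as a recursive walk that turns one nested module object into many flat records. This is a minimal illustration, not the actual `src/process_nodejs.py` logic; the input shape and field names (`name`, `desc`, `methods`) are assumed for the example.

```python
# A hypothetical nested module object resembling one entry of nodejs.all.json.
raw_module = {
    "name": "fs",
    "type": "module",
    "desc": "File system utilities.",
    "methods": [
        {"name": "readFile", "type": "method", "desc": "Reads a file."},
        {"name": "writeFile", "type": "method", "desc": "Writes a file."},
    ],
}

def flatten(module):
    """Emit one granular record per API item, namespaced under its module."""
    for item in module.get("methods", []):
        yield {
            "id": f"{module['name']}.{item['name']}",  # e.g. "fs.readFile"
            "parent": module["name"],
            "type": item["type"],
            "name": item["name"],
            "description": item["desc"],
        }

records = list(flatten(raw_module))
print(records[0]["id"])  # fs.readFile
```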
### 2. Dual-Format Output
We intentionally avoided a "one-size-fits-all" schema.
- **Documents** (`nodejs_documents.jsonl`): Preserves the cohesiveness of a module. Good for teaching a model "concept association" (e.g., that `fs.readFile` belongs with `fs.writeFile`).
- **Granular** (`nodejs_granular.jsonl`): Isolates every single function and property. Good for "instruction tuning" (e.g., "Write a function signature for `http.createServer`").
### 3. File Formats
- **JSONL:** Chosen for its streaming capabilities and human readability. Perfect for NLP pipelines.
- **Parquet:** Chosen for the Granular dataset to allow fast columnar access, filtering, and analysis (e.g., "Find all methods with > 3 arguments").
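The streaming property of JSONL is what makes it pipeline-friendly: each line is an independent JSON object, so a consumer never needs the whole file in memory. A small sketch (the records are illustrative, `io.StringIO` stands in for an open file):

```python
import io
import json

# Hypothetical granular records serialized as JSONL (one JSON object per line).
jsonl_text = "\n".join(json.dumps(r) for r in [
    {"id": "fs.readFile", "parent": "fs", "type": "method"},
    {"id": "fs.constants", "parent": "fs", "type": "property"},
    {"id": "http.createServer", "parent": "http", "type": "method"},
])

# Consume one record at a time; memory use stays constant regardless of
# file size, which is the point of choosing JSONL over one giant JSON blob.
methods = []
for line in io.StringIO(jsonl_text):
    record = json.loads(line)
    if record["type"] == "method":
        methods.append(record["id"])

print(methods)  # ['fs.readFile', 'http.createServer']
```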
## Dataset Structure
### Output Location

All processed files are located in `output/`:

```
output/
├── nodejs_documents.jsonl    # High-level module data
├── nodejs_granular.jsonl     # Individual API items
└── nodejs_granular.parquet   # Parquet version of granular data
```
### Schema: Documents (`nodejs_documents.jsonl`)

Each record represents a whole module (e.g., `Buffer`, `http`).

| Field | Type | Description |
|---|---|---|
| `module_name` | string | Name of the module (e.g., `fs`). |
| `type` | string | Usually `module` or `global`. |
| `description` | string | Raw HTML/Markdown description of the module. |
| `content` | json-string | Full nested JSON blob of the module's contents (methods, props). |
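Because `content` is stored as a JSON *string* rather than a nested object, it must be decoded before the module's structure can be walked. A hypothetical record (field values are illustrative):

```python
import json

# A hypothetical documents-style record mirroring the schema above.
row = {
    "module_name": "fs",
    "type": "module",
    "description": "<p>File system utilities.</p>",
    "content": json.dumps({"methods": [{"name": "readFile"}, {"name": "writeFile"}]}),
}

# Decode the json-string field, then traverse the nested blob.
content = json.loads(row["content"])
method_names = [m["name"] for m in content["methods"]]
print(method_names)  # ['readFile', 'writeFile']
```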
### Schema: Granular (`nodejs_granular.jsonl`)

Each record represents a single API item (function, property, event).

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique namespaced ID (e.g., `fs.readFile`). |
| `parent` | string | Parent module (e.g., `fs`). |
| `type` | string | `method`, `property`, `event`, etc. |
| `name` | string | Short name (e.g., `readFile`). |
| `description` | string | Description of just this item. |
| `metadata` | json-string | Detailed signatures, params, stability indices. |
## Use Cases
### 1. Pre-Training Code Models

Feed `nodejs_documents.jsonl` into a language model to teach it the general structure and API surface of Node.js. The large context windows of modern LLMs can easily ingest entire modules.
### 2. Instruction Tuning / RAG

Use `nodejs_granular.jsonl` to build a Retrieval-Augmented Generation (RAG) system.
- **Query:** "How do I read a file in Node?"
- **Retrieval:** Search against the `description` field in the granular dataset.
- **Context:** Retrieve the exact `metadata` (signature) for `fs.readFile`.
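The retrieval step above can be sketched with a naive keyword-overlap scorer over the `description` field; a production RAG system would use embeddings, but the data flow is the same. The two corpus records are hypothetical stand-ins shaped like the granular schema (their descriptions and signatures are illustrative, not the real documentation text).

```python
import json
import re

# Two hypothetical granular records; field names follow the schema above.
corpus = [
    {"id": "fs.readFile",
     "description": "Asynchronously reads the entire contents of a file.",
     "metadata": json.dumps({"signature": "fs.readFile(path[, options], callback)"})},
    {"id": "fs.writeFile",
     "description": "Asynchronously writes data to a file.",
     "metadata": json.dumps({"signature": "fs.writeFile(file, data[, options], callback)"})},
]

def retrieve(query, corpus):
    """Return the record whose description shares the most keywords with the query."""
    terms = set(re.findall(r"\w+", query.lower()))
    def score(rec):
        return len(terms & set(re.findall(r"\w+", rec["description"].lower())))
    return max(corpus, key=score)

hit = retrieve("read the contents of a file", corpus)
signature = json.loads(hit["metadata"])["signature"]
print(hit["id"], signature)
```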
### 3. API Analysis

Use `nodejs_granular.parquet` with Pandas/DuckDB to answer meta-questions:
- Which Node.js APIs are marked as Experimental?
- What is the average number of arguments for `fs` methods vs. `http` methods?
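The second question reduces to a group-by over parameter counts. The logic, shown here over in-memory rows for self-containedness, assumes the `metadata` blob carries a `"params"` list; in practice the rows would be loaded from `nodejs_granular.parquet` (e.g. via `pandas.read_parquet`).

```python
import json
from statistics import mean

# Hypothetical granular rows; the "params" shape inside metadata is assumed.
rows = [
    {"parent": "fs", "metadata": json.dumps({"params": ["path", "options", "callback"]})},
    {"parent": "fs", "metadata": json.dumps({"params": ["fd"]})},
    {"parent": "http", "metadata": json.dumps({"params": ["options", "requestListener"]})},
]

# Group parameter counts by parent module, then average each group.
arg_counts = {}
for row in rows:
    params = json.loads(row["metadata"])["params"]
    arg_counts.setdefault(row["parent"], []).append(len(params))

averages = {module: mean(counts) for module, counts in arg_counts.items()}
print(averages)
```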
## Provenance

- **Original File:** `datasets/raw/nodejs.all.json`
- **Script:** `src/process_nodejs.py`
- **Maintainer:** Antigravity (Agent) / User