---
license: mit
dataset_info:
  features:
  - name: title
    dtype: string
  - name: source
    dtype: string
  - name: url
    dtype: string
  - name: category
    dtype: string
  - name: language
    dtype: string
  - name: content
    dtype: string
  - name: chunk_id
    dtype: int64
  - name: chunk_length
    dtype: int64
  - name: last_updated
    dtype: string
  splits:
  - name: train
    num_bytes: 401051216
    num_examples: 426107
  - name: test
    num_bytes: 941198
    num_examples: 1000
  download_size: 180107389
  dataset_size: 401992414
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
tags:
- code
pretty_name: DevBase
size_categories:
- 100K<n<1M
---


# Dev Knowledge Base (Programming Documentation Dataset)

A large-scale, structured dataset of programming documentation collected from official sources across languages, frameworks, tools, and AI ecosystems.

Follow me on GitHub: https://github.com/nuhmanpk

---

## Overview

This dataset contains cleaned and structured documentation content scraped from official developer docs across multiple domains such as:

* Programming languages
* Frameworks (frontend, backend)
* DevOps & infrastructure tools
* Databases
* Machine learning & AI libraries

All content is chunked (~800 characters) and optimized for:

* Retrieval-Augmented Generation (RAG)
* Developer copilots
* Code assistants
* Semantic search

---

## Dataset Structure

Each row represents a chunk of documentation.

| Column       | Description                                |
| ------------ | ------------------------------------------ |
| title        | Page title or endpoint                     |
| source       | Source name (e.g., react, python, fastapi) |
| url          | Original documentation URL                 |
| category     | Type (language, framework, database, etc.) |
| language     | Programming language                       |
| content      | Cleaned text chunk                         |
| chunk_id     | Chunk index within page                    |
| chunk_length | Character length                           |
| last_updated | Timestamp                                  |
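
A quick way to confirm this schema from Python (a hedged snippet using the standard `datasets` API; loading only the small `test` split keeps the download light):

```python
from datasets import load_dataset

# Load just the 1,000-row test split to inspect the schema cheaply
ds = load_dataset("nuhmanpk/dev-knowledge-base", split="test")

print(ds.features)                            # column names and dtypes
print(ds[0]["source"], ds[0]["chunk_length"]) # one sample row's metadata
```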

---

## Sources Included

### Languages

python, javascript, typescript, go, rust, java, csharp, dart, swift, kotlin

### Frontend & Frameworks

react, nextjs, vue, nuxt, svelte, sveltekit, angular, astro, qwik, solidjs

### Backend & APIs

fastapi, django, flask, express, nestjs, hono, elysia

### Runtime & Tooling

nodejs, deno, bun, vite, webpack, turborepo, nx, pnpm, biome

### UI Libraries

tailwind, shadcn_ui, chakra_ui, mui

### Mobile & Desktop

react_native, expo, flutter, tauri, electron

### Machine Learning & AI

numpy, pandas, pytorch, tensorflow, scikit_learn, xgboost, lightgbm
transformers, langchain, llamaindex, openai, vllm, ollama, haystack
mastra, pydantic_ai, langfuse, mcp

### Databases

postgresql, mysql, sqlite, mongodb, redis, supabase, firebase
planetscale, neon, convex, drizzle_orm, qdrant, turso

### DevOps & Infrastructure

docker, kubernetes, terraform, ansible
github_actions, gitlab_ci, git, opentelemetry, inngest, temporal

### Other

claude_agent_sdk

Full crawl configuration available here: 

---

## Chunk Distribution

Approximate chunk counts per source after cleaning (the Zig source was removed):

| Source       | Chunks   |
| ------------ | -------- |
| python       | ~15,000  |
| javascript   | ~4,000   |
| go           | ~8,000   |
| react        | ~3,000   |
| nextjs       | ~4,000   |
| docker       | ~4,000   |
| kubernetes   | ~14,000  |
| transformers | ~14,000  |
| firebase     | ~300,000 |
| redis        | ~17,000  |
| git          | ~14,000  |
| flutter      | ~14,000  |
| supabase     | ~10,000  |

Total: **~427,000 chunks across 80+ sources** (426,107 train + 1,000 test)

---

## How to Use (Hugging Face)

### Install

```bash
pip install datasets
```

### Load Dataset

```python
from datasets import load_dataset

dataset = load_dataset("nuhmanpk/dev-knowledge-base")

print(dataset["train"][0])
```
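
To restrict to a single source, `Dataset.filter` works well (a hedged example; `"fastapi"` is one of the source names listed above):

```python
# Keep only FastAPI documentation chunks
fastapi_docs = dataset["train"].filter(lambda row: row["source"] == "fastapi")
print(len(fastapi_docs), "chunks")
```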

---

## Example Use Cases

### 1. Semantic Search

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Slicing a Dataset returns a dict of columns, so take the "content"
# column for the first 1,000 rows directly
docs = dataset["train"][:1000]["content"]
embeddings = model.encode(docs, normalize_embeddings=True)

query = "how to build api with fastapi"
q_emb = model.encode([query], normalize_embeddings=True)

# Cosine similarity via dot product (embeddings are normalized)
scores = np.dot(embeddings, q_emb.T).squeeze()
print(docs[scores.argmax()])
```

---

### 2. RAG Pipeline

```text
User Query → Embed → Vector DB → Retrieve → LLM → Answer
```

Use with any of these vector stores (see the FAISS sketch below):

* FAISS
* Qdrant
* Pinecone
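
A minimal FAISS sketch of the retrieval step (assuming `dataset` and the MiniLM model from the snippets above; in a full pipeline the retrieved chunks would then be passed to an LLM as context):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = dataset["train"][:1000]["content"]

# Normalized embeddings so inner product == cosine similarity
emb = model.encode(docs, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

q = model.encode(["how to build api with fastapi"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(q, dtype="float32"), k=3)
for i in ids[0]:
    print(docs[i][:120])
```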

---

### 3. Fine-tuning

Convert to instruction format:

```json
{
  "instruction": "Explain JWT authentication",
  "input": "",
  "output": "<documentation chunk>"
}
```
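
A hedged sketch of that conversion with `Dataset.map` (the instruction template here is an illustrative assumption; shape it to your task):

```python
# Hypothetical template: turn each chunk into an instruction/output pair
def to_instruction(row):
    return {
        "instruction": f"Explain this topic from the {row['source']} docs: {row['title']}",
        "input": "",
        "output": row["content"],
    }

sft = dataset["train"].map(to_instruction,
                           remove_columns=dataset["train"].column_names)
print(sft[0])
```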

---

### 4. Developer Chatbot

Build:

* AI coding assistant
* StackOverflow-style search
* Internal dev knowledge system

---

## Data Processing Pipeline

* Async crawling with rate limiting
* HTML parsing (BeautifulSoup)
* Navigation/content filtering
* Chunking (~800 chars; see the sketch below)
* Cleaning & binary removal

Crawler implementation: 
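
The crawler itself isn't reproduced here, but a minimal sketch of the ~800-character chunking step (assuming simple word-boundary splitting, not the published implementation) looks like:

```python
def chunk_text(text: str, size: int = 800) -> list[str]:
    """Split text into ~`size`-character chunks on word boundaries."""
    chunks, current, length = [], [], 0
    for word in text.split():
        if current and length + len(word) + 1 > size:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks

print([len(c) for c in chunk_text("lorem ipsum dolor " * 200)])
```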

---

## Limitations

* Some duplicate content may exist
* Chunk-level context only (not full pages)
* No semantic labeling yet
* Source sizes are highly uneven (firebase alone accounts for most chunks)

---

## Future Improvements

* Deduplication
* Better chunking (semantic splitting)
* Q/A generation
* Code extraction
* Metadata enrichment

---

## License

This dataset is built from publicly available documentation.
Refer to individual sources for licensing.

---

## Author

https://github.com/nuhmanpk

---

## Quick Example

```python
from datasets import load_dataset

ds = load_dataset("nuhmanpk/dev-knowledge-base")

for row in ds["train"].select(range(3)):
    print(row["source"], "→", row["content"][:150])
```

---

## Summary

A large, structured, and practical dataset for building developer-focused AI systems, from code assistants to full RAG pipelines.

---