saidsef committed commit 791cd05 (verified, parent: b00b1d8)

chore(docs): add read me to dataset

Files changed (1):
  1. README.md +217 -0
README.md CHANGED
@@ -21,3 +21,220 @@ configs:
  - split: train
    path: data/train-*
---

# Knowledge Base Documentation Dataset

A comprehensive, pre-processed, and vectorized dataset containing documentation from 25+ popular open-source projects and cloud platforms, optimized for Retrieval-Augmented Generation (RAG) applications.

## 📊 Dataset Overview

This dataset aggregates technical documentation from leading open-source projects across the cloud-native, DevOps, machine learning, and infrastructure domains. Each document has been chunked and embedded with the `all-MiniLM-L6-v2` sentence-transformer model.

**Dataset ID**: `saidsef/knowledge-base-docs`

36
+
37
+ The dataset includes documentation from the following projects:
38
+
39
+ | Source | Domain | File Types |
40
+ |--------|--------|------------|
41
+ | **kubernetes** | Container Orchestration | Markdown |
42
+ | **terraform** | Infrastructure as Code | MDX |
43
+ | **kustomize** | Kubernetes Configuration | Markdown |
44
+ | **ingress-nginx** | Kubernetes Ingress | Markdown |
45
+ | **helm** | Package Management | Markdown |
46
+ | **external-secrets** | Secrets Management | Markdown |
47
+ | **prometheus** | Monitoring | Markdown |
48
+ | **argo-cd** | GitOps | Markdown |
49
+ | **istio** | Service Mesh | Markdown |
50
+ | **scikit-learn** | Machine Learning | RST |
51
+ | **cilium** | Networking & Security | RST |
52
+ | **redis** | In-Memory Database | Markdown |
53
+ | **grafana** | Observability | Markdown |
54
+ | **docker** | Containerization | Markdown |
55
+ | **linux** | Operating System | RST |
56
+ | **ckad-exercises** | Kubernetes Certification | Markdown |
57
+ | **aws-eks-best-practices** | AWS EKS | Markdown |
58
+ | **gcp-professional-services** | Google Cloud | Markdown |
59
+ | **external-dns** | DNS Management | Markdown |
60
+ | **google-kubernetes-engine** | GKE | Markdown |
61
+ | **consul** | Service Mesh | Markdown |
62
+ | **vault** | Secrets Management | MDX |
63
+ | **tekton** | CI/CD | Markdown |
64
+ | **model-context-protocol-mcp** | AI Context Protocol | Markdown |
65
+
## 📋 Dataset Schema

Each row in the dataset contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Chunked text content (500 words per chunk with a 50-word overlap) |
| `original_id` | int/float | Reference to the original document ID |
| `embeddings` | list[float] | 384-dimensional embedding vector from `all-MiniLM-L6-v2` |

## 🔧 Dataset Creation Process

### 1. **Data Collection**
- Shallow clone of 25+ GitHub repositories
- Extraction of documentation files (`.md`, `.mdx`, `.rst`)

### 2. **Content Processing**
- Removal of YAML frontmatter
- Conversion to LLM-friendly Markdown
- Stripping of scripts, styles, and media elements
- Preservation of code blocks with proper formatting

### 3. **Text Chunking**
- **Chunk size**: 500 words
- **Overlap**: 50 words
- Ensures semantic continuity across chunks

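The chunking step above can be sketched as a sliding window over whitespace-split words. This is a minimal illustration of the stated parameters (500-word chunks, 50-word overlap), not the pipeline's actual code, which is not published with the dataset:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into word chunks of `chunk_size`, with the last
    `overlap` words of each chunk repeated at the start of the next."""
    words = text.split()
    if len(words) <= chunk_size:
        return [" ".join(words)] if words else []
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

Because consecutive chunks share 50 words, a sentence that straddles a chunk boundary still appears intact in at least one chunk.
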
### 4. **Vectorization**
- **Model**: `all-MiniLM-L6-v2`
- **Embedding dimensions**: 384
- **Normalization**: enabled, so cosine similarity reduces to a dot product
- Pre-computed embeddings for fast retrieval

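A quick NumPy illustration of why the normalization matters: for L2-normalized vectors, the full cosine-similarity formula collapses to a plain dot product, which is what makes retrieval over the whole corpus a single matrix operation. The vectors here are toy stand-ins, not real embeddings:

```python
import numpy as np

# Two toy vectors standing in for document embeddings (hypothetical values)
a = np.array([3.0, 4.0])
b = np.array([1.0, 0.0])

# L2-normalize, as the dataset's embeddings were at creation time
a_n = a / np.linalg.norm(a)
b_n = b / np.linalg.norm(b)

# Full cosine similarity vs. dot product of the normalized vectors
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot = np.dot(a_n, b_n)

assert np.isclose(cosine, dot)  # identical; 0.6 for these vectors
```
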
### 5. **Storage Format**
- **Format**: Apache Parquet
- **Compression**: columnar compression for compact storage and fast scans
- **File**: `knowledge_base.parquet`

## 💻 Usage Examples

### Loading the Dataset

```python
import pandas as pd
from datasets import load_dataset

# From the Hugging Face Hub
dataset = load_dataset("saidsef/knowledge-base-docs")
df = dataset['train'].to_pandas()

# Or from a local Parquet file
df = pd.read_parquet("knowledge_base.parquet", engine="pyarrow")
```

### Semantic Search / RAG Implementation

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Load the same model that was used to embed the dataset
model = SentenceTransformer('all-MiniLM-L6-v2')

def retrieve(query, df, k=5):
    """Retrieve the top-k most relevant chunks by cosine similarity."""
    # Encode the query, normalized to match the dataset embeddings
    query_vec = model.encode(query, normalize_embeddings=True)

    # Stack the pre-computed embeddings into a matrix
    embeddings_matrix = np.vstack(df['embeddings'].values)

    # Cosine similarity (the embeddings are already normalized, but
    # dividing by the norms keeps this robust either way)
    norms = np.linalg.norm(embeddings_matrix, axis=1) * np.linalg.norm(query_vec)
    scores = np.dot(embeddings_matrix, query_vec) / norms

    # Attach scores and return the k best rows without mutating the input
    results = df.assign(score=scores)
    return results.sort_values(by='score', ascending=False).head(k)

# Example query
results = retrieve("How do I configure an nginx ingress controller?", df, k=3)
print(results[['content', 'score']])
```

### Building a RAG Pipeline

```python
from transformers import pipeline

# Load an extractive question-answering model
qa_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def rag_answer(question, df, k=3):
    """RAG: retrieve relevant context, then extract an answer from it."""
    # Retrieve the k most relevant chunks
    context_rows = retrieve(question, df, k=k)
    context_text = " ".join(context_rows['content'].tolist())

    # Extract the answer span from the retrieved context
    result = qa_pipeline(question=question, context=context_text)
    return result['answer'], context_rows

answer, sources = rag_answer("What is a Kubernetes pod?", df)
print(f"Answer: {answer}")
```

## 📈 Dataset Statistics

```python
# Total chunks
print(f"Total chunks: {len(df)}")

# Average chunk length in words
df['chunk_length'] = df['content'].apply(lambda x: len(x.split()))
print(f"Average chunk length: {df['chunk_length'].mean():.0f} words")

# Embedding dimensionality
print(f"Embedding dimensions: {len(df['embeddings'].iloc[0])}")
```

## 🚀 Use Cases

- **RAG Applications**: Build retrieval-augmented generation systems
- **Semantic Search**: Find relevant documentation across multiple projects
- **Question Answering**: Create technical support chatbots
- **Documentation Assistant**: Help developers navigate complex documentation
- **Learning Resources**: Train models on high-quality technical content
- **Comparative Analysis**: Compare documentation approaches across projects

## 🔍 Performance Considerations

- **Pre-computed embeddings**: no runtime encoding of the corpus is needed
- **Optimized retrieval**: a single matrix-vector product computes all cosine similarities
- **Parquet format**: efficient storage and query performance
- **Chunk overlap**: better context preservation across boundaries

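Building on the matrix-product point above: when only the top-k results are needed, `np.argpartition` avoids fully sorting all similarity scores. A sketch on synthetic data (the array names and sizes are illustrative, not from the dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 10,000 normalized 384-d "embeddings" and one query
emb = rng.normal(size=(10_000, 384))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
query = emb[42] + 0.01 * rng.normal(size=384)  # near-duplicate of row 42
query /= np.linalg.norm(query)

# One matrix-vector product yields all cosine similarities at once
scores = emb @ query

# argpartition selects the k largest in O(n); only those k get sorted
k = 5
top_k = np.argpartition(scores, -k)[-k:]
top_k = top_k[np.argsort(scores[top_k])[::-1]]

assert top_k[0] == 42  # the perturbed source vector ranks first
```

For corpora much larger than this dataset, an approximate-nearest-neighbor index (e.g. FAISS) would be the next step, but exact search over pre-computed embeddings is typically fast enough here.
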
## 🛠️ Requirements

```txt
pandas>=2.0.0
numpy>=1.24.0
sentence-transformers>=2.0.0
pyarrow>=12.0.0
datasets>=2.0.0
transformers>=4.30.0
```

## 📝 License

This dataset is a compilation of documentation from various open-source projects, and each source retains its original license:

- Most projects use the Apache 2.0 or MIT licenses
- Refer to the individual project repositories for specific licensing terms

## 🤝 Contributing

To add new sources or update existing documentation:

1. Add the source configuration to the `sites` list
2. Run the data collection pipeline
3. Verify content processing and embedding quality
4. Submit a pull request with the updated dataset

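The `sites` list referenced in step 1 belongs to the collection pipeline, which is not published with this dataset. As a purely hypothetical sketch (every field name here is illustrative, not the pipeline's actual schema), a new entry might look like:

```python
# Hypothetical shape of a `sites` entry -- field names are illustrative only
new_site = {
    "name": "example-project",                        # source label
    "repo": "https://github.com/example/project",     # repository to shallow-clone
    "docs_path": "docs",                              # subdirectory holding docs
    "extensions": [".md"],                            # file types to extract
}
```
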
## 📧 Contact

For questions, issues, or suggestions, please open an issue on the GitHub repository or contact the maintainer.

## 🙏 Acknowledgments

Special thanks to all the open-source projects that maintain excellent documentation, making this dataset possible.

---

**Last Updated**: December 2025
**Version**: 1.0
**Embedding Model**: all-MiniLM-L6-v2
**Total Sources**: 25+