---
dataset_info:
  features:
  - name: content
    dtype: large_string
  - name: url
    dtype: large_string
  - name: branch
    dtype: large_string
  - name: source
    dtype: large_string
  - name: embeddings
    list: float64
  - name: score
    dtype: float64
  splits:
  - name: train
    num_bytes: 103214272
    num_examples: 15084
  download_size: 57429042
  dataset_size: 103214272
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Knowledge Base Documentation Dataset
A comprehensive, pre-processed and vectorized dataset containing documentation from 25+ popular open-source projects and cloud platforms, optimized for Retrieval-Augmented Generation (RAG) applications.
## Dataset Overview
This dataset aggregates technical documentation from leading open-source projects across cloud-native, DevOps, machine learning, and infrastructure domains. Each document has been chunked and embedded using the `all-MiniLM-L6-v2` sentence transformer model.
**Dataset ID**: `saidsef/knowledge-base-docs`
## Sources
The dataset includes documentation from the following projects:
| Source | Domain | File Types |
|--------|--------|------------|
| **kubernetes** | Container Orchestration | Markdown |
| **terraform** | Infrastructure as Code | MDX |
| **kustomize** | Kubernetes Configuration | Markdown |
| **ingress-nginx** | Kubernetes Ingress | Markdown |
| **helm** | Package Management | Markdown |
| **external-secrets** | Secrets Management | Markdown |
| **prometheus** | Monitoring | Markdown |
| **argo-cd** | GitOps | Markdown |
| **istio** | Service Mesh | Markdown |
| **scikit-learn** | Machine Learning | RST |
| **cilium** | Networking & Security | RST |
| **redis** | In-Memory Database | Markdown |
| **grafana** | Observability | Markdown |
| **docker** | Containerization | Markdown |
| **linux** | Operating System | RST |
| **ckad-exercises** | Kubernetes Certification | Markdown |
| **aws-eks-best-practices** | AWS EKS | Markdown |
| **gcp-professional-services** | Google Cloud | Markdown |
| **external-dns** | DNS Management | Markdown |
| **google-kubernetes-engine** | GKE | Markdown |
| **consul** | Service Mesh | Markdown |
| **vault** | Secrets Management | MDX |
| **tekton** | CI/CD | Markdown |
| **model-context-protocol-mcp** | AI Context Protocol | Markdown |
## Dataset Schema
Each row in the dataset contains the following fields:
| Field | Type | Description |
|-------|------|-------------|
| `content` | string | Chunked text content (500 words with 50-word overlap) |
| `url` | string | URL of the source document |
| `branch` | string | Git branch the document was taken from |
| `source` | string | Name of the source project (e.g. `kubernetes`) |
| `embeddings` | list[float] | 384-dimensional embedding vector from `all-MiniLM-L6-v2` |
| `score` | float | Relevance score field (populated at retrieval time) |
## Dataset Creation Process
### 1. **Data Collection**
- Shallow clone of 25+ GitHub repositories
- Extraction of documentation files (`.md`, `.mdx`, `.rst`)
### 2. **Content Processing**
- Removal of YAML frontmatter
- Conversion to LLM-friendly markdown format
- Stripping of scripts, styles, and media elements
- Code block preservation with proper formatting
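The frontmatter-removal step above can be sketched with a simple regex; the actual pipeline code is not part of this card, so the function name and pattern here are illustrative:

```python
import re

def strip_frontmatter(text: str) -> str:
    """Remove a leading YAML frontmatter block delimited by '---' lines."""
    # Match '---' at the very start of the document, any lines (non-greedy),
    # then the closing '---'; leave documents without frontmatter untouched.
    return re.sub(r"\A---\n.*?\n---\n", "", text, flags=re.DOTALL)

doc = "---\ntitle: Ingress\n---\n# Ingress\nBody text."
print(strip_frontmatter(doc))  # "# Ingress\nBody text."
```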
### 3. **Text Chunking**
- **Chunk size**: 500 words
- **Overlap**: 50 words
- Ensures semantic continuity across chunks
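The chunking scheme above amounts to a sliding window over words; a minimal sketch (function name and exact boundary handling are illustrative, not the pipeline's actual code):

```python
def chunk_words(text: str, size: int = 500, overlap: int = 50):
    """Split text into chunks of `size` words, adjacent chunks sharing `overlap` words."""
    words = text.split()
    step = size - overlap  # advance 450 words per chunk
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + size]))
        if start + size >= len(words):
            break  # the last chunk already covers the tail
    return chunks
```

The 50-word overlap means a sentence cut at one chunk boundary is repeated at the start of the next chunk, which is what preserves semantic continuity for retrieval.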
### 4. **Vectorization**
- **Model**: `all-MiniLM-L6-v2`
- **Embedding dimensions**: 384
- **Normalization**: Enabled for cosine similarity
- Pre-computed embeddings for fast retrieval
### 5. **Storage Format**
- **Format**: Apache Parquet
- **Compression**: Parquet's built-in columnar compression
- **File**: `knowledge_base.parquet`
## Usage Examples
### Loading the Dataset
```python
import pandas as pd
from datasets import load_dataset
# From Hugging Face Hub
dataset = load_dataset("saidsef/knowledge-base-docs")
df = dataset['train'].to_pandas()
# From local Parquet file
df = pd.read_parquet("knowledge_base.parquet", engine="pyarrow")
```
### Semantic Search / RAG Implementation
```python
import numpy as np
from sentence_transformers import SentenceTransformer
# Load the same model used for embedding
model = SentenceTransformer('all-MiniLM-L6-v2')

def retrieve(query, df, k=5):
    """Retrieve the top-k most relevant chunks using cosine similarity."""
    # Encode the query with the same normalization used for the corpus
    query_vec = model.encode(query, normalize_embeddings=True)
    # Stack the pre-computed embeddings into an (n, 384) matrix
    embeddings_matrix = np.vstack(df['embeddings'].values)
    # Cosine similarity between the query and every chunk
    norms = np.linalg.norm(embeddings_matrix, axis=1) * np.linalg.norm(query_vec)
    scores = np.dot(embeddings_matrix, query_vec) / norms
    # Attach scores and return the k best matches (without mutating df)
    ranked = df.assign(score=scores)
    return ranked.sort_values(by='score', ascending=False).head(k)
# Example query
results = retrieve("How do I configure an nginx ingress controller?", df, k=3)
print(results[['content', 'score']])
```
### Building a RAG Pipeline
```python
from transformers import pipeline
# Load a question-answering model
qa_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
def rag_answer(question, df, k=3):
    """RAG: retrieve relevant context and generate an answer."""
    # Retrieve the most relevant chunks
    context_rows = retrieve(question, df, k=k)
    context_text = " ".join(context_rows['content'].tolist())
    # Extract an answer from the retrieved context
    result = qa_pipeline(question=question, context=context_text)
    return result['answer'], context_rows
answer, sources = rag_answer("What is a Kubernetes pod?", df)
print(f"Answer: {answer}")
```
## Dataset Statistics
```python
# Total chunks
print(f"Total chunks: {len(df)}")
# Average chunk length
df['chunk_length'] = df['content'].apply(lambda x: len(x.split()))
print(f"Average chunk length: {df['chunk_length'].mean():.0f} words")
# Embedding dimensionality
print(f"Embedding dimensions: {len(df['embeddings'].iloc[0])}")
```
## Use Cases
- **RAG Applications**: Build retrieval-augmented generation systems
- **Semantic Search**: Find relevant documentation across multiple projects
- **Question Answering**: Create technical support chatbots
- **Documentation Assistant**: Help developers navigate complex documentation
- **Learning Resources**: Train models on high-quality technical content
- **Comparative Analysis**: Compare documentation approaches across projects
## Performance Considerations
- **Pre-computed embeddings**: No need for runtime encoding
- **Optimized retrieval**: Matrix multiplication for fast cosine similarity
- **Parquet format**: Efficient storage and query performance
- **Chunk overlap**: Better context preservation across boundaries
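Because the stored vectors are normalized, cosine similarity reduces to a plain dot product, which is why a single matrix multiplication suffices for retrieval. A small numpy check of this equivalence (the data here is random, purely to illustrate the identity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Four unit-length "corpus" vectors and one unit-length "query"
a = rng.normal(size=(4, 384))
a /= np.linalg.norm(a, axis=1, keepdims=True)
q = rng.normal(size=384)
q /= np.linalg.norm(q)

dot = a @ q  # one matrix-vector product
cosine = (a @ q) / (np.linalg.norm(a, axis=1) * np.linalg.norm(q))
print(np.allclose(dot, cosine))  # True: the norms are all 1
```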
## Requirements
```txt
pandas>=2.0.0
numpy>=1.24.0
sentence-transformers>=2.0.0
pyarrow>=12.0.0
datasets>=2.0.0
```
## License
This dataset is a compilation of documentation from various open-source projects. Each source maintains its original license:
- Most projects use Apache 2.0 or MIT licenses
- Refer to individual project repositories for specific licensing terms
## Contributing
To add new sources or update existing documentation:
1. Add the source configuration to the `sites` list
2. Run the data collection pipeline
3. Verify content processing and embedding quality
4. Submit a pull request with updated dataset
## Contact
For questions, issues, or suggestions, please open an issue on the GitHub repository or contact the maintainer.
## Acknowledgments
Special thanks to all the open-source projects that maintain excellent documentation, making this dataset possible.
---
**Last Updated**: December 2025
**Version**: 1.0
**Embedding Model**: all-MiniLM-L6-v2
**Total Sources**: 25+