---
dataset_info:
  features:
    - name: content
      dtype: large_string
    - name: url
      dtype: large_string
    - name: branch
      dtype: large_string
    - name: source
      dtype: large_string
    - name: embeddings
      list: float64
    - name: score
      dtype: float64
  splits:
    - name: train
      num_bytes: 103214272
      num_examples: 15084
  download_size: 57429042
  dataset_size: 103214272
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Knowledge Base Documentation Dataset

A comprehensive, pre-processed and vectorized dataset containing documentation from 25+ popular open-source projects and cloud platforms, optimized for Retrieval-Augmented Generation (RAG) applications.

## 📊 Dataset Overview

This dataset aggregates technical documentation from leading open-source projects across cloud-native, DevOps, machine learning, and infrastructure domains. Each document has been chunked and embedded using the all-MiniLM-L6-v2 sentence transformer model.

**Dataset ID**: `saidsef/knowledge-base-docs`

## 🎯 Sources

The dataset includes documentation from the following projects:

| Source | Domain | File Types |
|--------|--------|------------|
| kubernetes | Container Orchestration | Markdown |
| terraform | Infrastructure as Code | MDX |
| kustomize | Kubernetes Configuration | Markdown |
| ingress-nginx | Kubernetes Ingress | Markdown |
| helm | Package Management | Markdown |
| external-secrets | Secrets Management | Markdown |
| prometheus | Monitoring | Markdown |
| argo-cd | GitOps | Markdown |
| istio | Service Mesh | Markdown |
| scikit-learn | Machine Learning | RST |
| cilium | Networking & Security | RST |
| redis | In-Memory Database | Markdown |
| grafana | Observability | Markdown |
| docker | Containerization | Markdown |
| linux | Operating System | RST |
| ckad-exercises | Kubernetes Certification | Markdown |
| aws-eks-best-practices | AWS EKS | Markdown |
| gcp-professional-services | Google Cloud | Markdown |
| external-dns | DNS Management | Markdown |
| google-kubernetes-engine | GKE | Markdown |
| consul | Service Mesh | Markdown |
| vault | Secrets Management | MDX |
| tekton | CI/CD | Markdown |
| model-context-protocol-mcp | AI Context Protocol | Markdown |

## 📋 Dataset Schema

Each row in the dataset contains the following fields:

| Field | Type | Description |
|-------|------|-------------|
| content | large_string | Chunked text content (500 words with 50-word overlap) |
| url | large_string | URL of the source document |
| branch | large_string | Git branch the document was collected from |
| source | large_string | Name of the source project |
| embeddings | list[float64] | 384-dimensional embedding vector from all-MiniLM-L6-v2 |
| score | float64 | Relevance score associated with the chunk |
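
To sanity-check the schema, you can stream a single row from the Hub and inspect its fields (a minimal sketch using the `datasets` streaming mode):

```python
from datasets import load_dataset

# Stream one row instead of downloading the full dataset
ds = load_dataset("saidsef/knowledge-base-docs", split="train", streaming=True)
row = next(iter(ds))

print(sorted(row.keys()))      # ['branch', 'content', 'embeddings', 'score', 'source', 'url']
print(len(row["embeddings"]))  # 384
```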

## 🔧 Dataset Creation Process

### 1. Data Collection

- Shallow clone of 25+ GitHub repositories
- Extraction of documentation files (`.md`, `.mdx`, `.rst`), as sketched below
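
A minimal sketch of this collection step, assuming a shallow `git clone` per repository (the repository list here is illustrative, not the pipeline's actual configuration):

```python
import subprocess
from pathlib import Path

DOC_EXTENSIONS = {".md", ".mdx", ".rst"}
repos = ["https://github.com/kubernetes/website"]  # illustrative subset

for repo in repos:
    target = Path("repos") / repo.rsplit("/", 1)[-1]
    # --depth 1 performs a shallow clone, fetching only the latest commit
    subprocess.run(["git", "clone", "--depth", "1", repo, str(target)], check=True)
    docs = [p for p in target.rglob("*") if p.suffix in DOC_EXTENSIONS]
    print(f"{repo}: {len(docs)} documentation files")
```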

### 2. Content Processing

- Removal of YAML frontmatter (see the sketch below)
- Conversion to LLM-friendly Markdown
- Stripping of scripts, styles, and media elements
- Preservation of code blocks with proper formatting
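
Frontmatter removal, for instance, comes down to a small regular expression. This is a sketch; the pipeline's actual cleaning rules may differ:

```python
import re

FRONTMATTER = re.compile(r"\A---\s*\n.*?\n---\s*\n", re.DOTALL)

def strip_frontmatter(text: str) -> str:
    """Drop a leading YAML frontmatter block, if present."""
    return FRONTMATTER.sub("", text, count=1)

doc = "---\ntitle: Ingress\n---\n# Ingress\nBody text."
print(strip_frontmatter(doc))  # -> "# Ingress\nBody text."
```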

### 3. Text Chunking

- Chunk size: 500 words
- Overlap: 50 words
- Ensures semantic continuity across chunk boundaries (see the sketch below)
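
The sliding-window chunking described above can be approximated as follows (a sketch using the stated 500/50 word parameters; the pipeline's exact splitting logic may differ):

```python
def chunk_words(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into word chunks of `size` words, repeating `overlap` words between neighbours."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

chunks = chunk_words("word " * 1200)
print(len(chunks), [len(c.split()) for c in chunks])  # 3 chunks: 500, 500, 300 words
```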

### 4. Vectorization

- Model: all-MiniLM-L6-v2
- Embedding dimensions: 384
- Normalization: enabled, so dot products equal cosine similarities
- Pre-computed embeddings for fast retrieval (see the encoding sketch below)
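
Reproducing the embedding step with `sentence-transformers` looks roughly like this (a minimal sketch):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# normalize_embeddings=True yields unit-length vectors, so dot product == cosine similarity
vectors = model.encode(
    ["Pods are the smallest deployable units in Kubernetes."],
    normalize_embeddings=True,
)
print(vectors.shape)  # (1, 384)
```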

### 5. Storage Format

- Format: Apache Parquet
- Compression: optimized for query performance
- File: `knowledge_base.parquet` (published on the Hub under `data/train-*`)
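
Writing processed chunks out to Parquet is a one-liner with pandas (a sketch; the column values shown are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "content": ["chunk text ..."],
    "url": ["https://github.com/kubernetes/website"],
    "branch": ["main"],
    "source": ["kubernetes"],
    "embeddings": [[0.0] * 384],
    "score": [0.0],
})
# Snappy compression is the pandas/pyarrow default: a good balance of size and read speed
df.to_parquet("knowledge_base.parquet", engine="pyarrow")
```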

## 💻 Usage Examples

### Loading the Dataset

```python
import pandas as pd
from datasets import load_dataset

# From the Hugging Face Hub
dataset = load_dataset("saidsef/knowledge-base-docs")
df = dataset["train"].to_pandas()

# From a local Parquet file
df = pd.read_parquet("knowledge_base.parquet", engine="pyarrow")
```

### Semantic Search / RAG Implementation

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Load the same model used to embed the dataset
model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve(query, df, k=5):
    """Retrieve the top-k most relevant chunks using cosine similarity."""
    # Encode the query with the same normalization as the stored embeddings
    query_vec = model.encode(query, normalize_embeddings=True)

    # Stack the per-row embedding lists into a (num_chunks, 384) matrix
    embeddings_matrix = np.vstack(df["embeddings"].values)

    # Stored embeddings are unit-normalized, so a dot product is the cosine similarity
    scores = embeddings_matrix @ query_vec

    # Score a copy and return the k best matches (leaves the input frame unmodified)
    return df.assign(score=scores).nlargest(k, "score")

# Example query
results = retrieve("How do I configure an nginx ingress controller?", df, k=3)
print(results[["content", "score"]])
```

### Building a RAG Pipeline

```python
from transformers import pipeline

# Load an extractive question-answering model
qa_pipeline = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def rag_answer(question, df, k=3):
    """RAG: retrieve relevant context, then extract an answer from it."""
    # Retrieve the most relevant chunks
    context_rows = retrieve(question, df, k=k)
    context_text = " ".join(context_rows["content"].tolist())

    # Extract the answer span from the retrieved context
    result = qa_pipeline(question=question, context=context_text)
    return result["answer"], context_rows

answer, sources = rag_answer("What is a Kubernetes pod?", df)
print(f"Answer: {answer}")
```

## 📈 Dataset Statistics

```python
# Total chunks
print(f"Total chunks: {len(df)}")

# Average chunk length in words
df["chunk_length"] = df["content"].apply(lambda text: len(text.split()))
print(f"Average chunk length: {df['chunk_length'].mean():.0f} words")

# Embedding dimensionality
print(f"Embedding dimensions: {len(df['embeddings'].iloc[0])}")
```

## 🚀 Use Cases

- **RAG Applications**: Build retrieval-augmented generation systems
- **Semantic Search**: Find relevant documentation across multiple projects
- **Question Answering**: Create technical support chatbots
- **Documentation Assistant**: Help developers navigate complex documentation
- **Learning Resources**: Train models on high-quality technical content
- **Comparative Analysis**: Compare documentation approaches across projects

## 🔍 Performance Considerations

- **Pre-computed embeddings**: No runtime encoding of the corpus is required
- **Optimized retrieval**: A single matrix-vector product scores every chunk (see the sketch below)
- **Parquet format**: Efficient storage and query performance
- **Chunk overlap**: Better context preservation across chunk boundaries
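
For repeated queries, it pays to stack the embeddings into a matrix once and reuse it rather than rebuilding it on every call. A sketch building on the `retrieve` example above (reuses the `df` and `model` from the usage section):

```python
import numpy as np

# Build the (num_chunks, 384) matrix once at startup
embeddings_matrix = np.vstack(df["embeddings"].values).astype(np.float32)

def retrieve_fast(query, k=5):
    query_vec = model.encode(query, normalize_embeddings=True)
    scores = embeddings_matrix @ query_vec    # one matrix-vector product
    top = np.argpartition(scores, -k)[-k:]    # top-k without a full sort
    top = top[np.argsort(scores[top])[::-1]]  # order the k hits by score
    return df.iloc[top].assign(score=scores[top])
```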

## 🛠️ Requirements

```text
pandas>=2.0.0
numpy>=1.24.0
sentence-transformers>=2.0.0
pyarrow>=12.0.0
datasets>=2.0.0
```

## 📝 License

This dataset is a compilation of documentation from various open-source projects. Each source maintains its original license:

- Most projects use Apache 2.0 or MIT licenses
- Refer to individual project repositories for specific licensing terms

## 🤝 Contributing

To add new sources or update existing documentation:

1. Add the source configuration to the `sites` list
2. Run the data collection pipeline
3. Verify content processing and embedding quality
4. Submit a pull request with the updated dataset

## 📧 Contact

For questions, issues, or suggestions, please open an issue on the GitHub repository or contact the maintainer.

## 🙏 Acknowledgments

Special thanks to all the open-source projects that maintain excellent documentation, making this dataset possible.


---

**Last Updated**: December 2025
**Version**: 1.0
**Embedding Model**: all-MiniLM-L6-v2
**Total Sources**: 25+