# 🚀 DeepWiki API
This is the backend API for DeepWiki, providing smart code analysis and AI-powered documentation generation.
## ✨ Features
- **Streaming AI Responses**: Real-time responses using Google's Generative AI (Gemini)
- **Smart Code Analysis**: Automatically analyzes GitHub repositories
- **RAG Implementation**: Retrieval Augmented Generation for context-aware responses
- **Local Storage**: All data stored locally - no cloud dependencies
- **Conversation History**: Maintains context across multiple questions
## 🔧 Quick Setup
### Step 1: Install Dependencies
```bash
# From the project root
pip install -r api/requirements.txt
```
### Step 2: Set Up Environment Variables
Create a `.env` file in the project root:
```
# Required API Keys
GOOGLE_API_KEY=your_google_api_key # Required for Google Gemini models
OPENAI_API_KEY=your_openai_api_key # Required for embeddings and OpenAI models
# Optional API Keys
OPENROUTER_API_KEY=your_openrouter_api_key # Required only if using OpenRouter models
# AWS Bedrock Configuration
AWS_ACCESS_KEY_ID=your_aws_access_key_id # Required for AWS Bedrock models
AWS_SECRET_ACCESS_KEY=your_aws_secret_key # Required for AWS Bedrock models
AWS_REGION=us-east-1 # Optional, defaults to us-east-1
AWS_ROLE_ARN=your_aws_role_arn # Optional, for role-based authentication
# OpenAI API Configuration
OPENAI_BASE_URL=https://custom-api-endpoint.com/v1 # Optional, for custom OpenAI API endpoints
# Ollama host
OLLAMA_HOST=https://your_ollama_host" # Optional: Add Ollama host if not local. default: http://localhost:11434
# Server Configuration
PORT=8001 # Optional, defaults to 8001
```
If you are not using Ollama mode, you must configure an OpenAI API key for embeddings. The other API keys are only required when you use models from the corresponding providers.
> 💡 **Where to get these keys:**
> - Get a Google API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
> - Get an OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys)
> - Get an OpenRouter API key from [OpenRouter](https://openrouter.ai/keys)
> - Get AWS credentials from [AWS IAM Console](https://console.aws.amazon.com/iam/)
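Before starting the server, you can confirm the `.env` file loads correctly. A minimal sketch using `python-dotenv` (install with `pip install python-dotenv` if it is not already a dependency):

```python
import os
from dotenv import load_dotenv

# Reads the .env file from the current working directory (the project root)
load_dotenv()

# Check the keys needed for the default setup (Gemini models + OpenAI embeddings)
for key in ("GOOGLE_API_KEY", "OPENAI_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'MISSING'}")
```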
#### Advanced Environment Configuration
##### Provider-Based Model Selection
DeepWiki supports multiple LLM providers. The environment variables above are only required for the providers you actually use (a quick validation helper is sketched after this list):
- **Google Gemini**: Requires `GOOGLE_API_KEY`
- **OpenAI**: Requires `OPENAI_API_KEY`
- **OpenRouter**: Requires `OPENROUTER_API_KEY`
- **AWS Bedrock**: Requires `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`
- **Ollama**: No API key required (runs locally)
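If you want to fail fast when a provider's credentials are missing, a check like the one below works. The mapping is illustrative, not DeepWiki's internal code:

```python
import os

# Provider -> required environment variables (mirrors the list above)
REQUIRED_ENV = {
    "google": ["GOOGLE_API_KEY"],
    "openai": ["OPENAI_API_KEY"],
    "openrouter": ["OPENROUTER_API_KEY"],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "ollama": [],  # runs locally, no key required
}

def missing_keys(provider: str) -> list[str]:
    """Return the unset environment variables for a given provider."""
    return [k for k in REQUIRED_ENV.get(provider, []) if not os.getenv(k)]

print(missing_keys("bedrock"))  # e.g. ['AWS_ACCESS_KEY_ID', ...] if unset
```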
##### Custom OpenAI API Endpoints
The `OPENAI_BASE_URL` variable allows you to specify a custom endpoint for the OpenAI API. This is useful for:
- Enterprise users with private API channels
- Organizations using self-hosted or custom-deployed LLM services
- Integration with third-party OpenAI API-compatible services
**Example:** you can use any endpoint that supports the OpenAI protocol, from any provider:
```
OPENAI_BASE_URL=https://custom-openai-endpoint.com/v1
```
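To verify that a custom endpoint actually speaks the OpenAI protocol, you can point the official `openai` Python client at it. The model name below is a placeholder for whatever the endpoint serves:

```python
import os
from openai import OpenAI

# base_url falls back to the example endpoint from the config above
client = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL", "https://custom-openai-endpoint.com/v1"),
    api_key=os.getenv("OPENAI_API_KEY"),
)

response = client.chat.completions.create(
    model="your-model-name",  # placeholder; use a model the endpoint provides
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```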
##### Configuration Files
DeepWiki now uses JSON configuration files to manage various system components instead of hardcoded values:
1. **`generator.json`**: Configuration for text generation models
- Located in `api/config/` by default
- Defines available model providers (Google, OpenAI, OpenRouter, AWS Bedrock, Ollama)
- Specifies default and available models for each provider
- Contains model-specific parameters like temperature and top_p
2. **`embedder.json`**: Configuration for embedding models and text processing
- Located in `api/config/` by default
- Defines embedding models for vector storage
- Contains retriever configuration for RAG
- Specifies text splitter settings for document chunking
3. **`repo.json`**: Configuration for repository handling
- Located in `api/config/` by default
- Contains file filters to exclude certain files and directories
- Defines repository size limits and processing rules
You can customize the configuration directory location using the environment variable:
```
DEEPWIKI_CONFIG_DIR=/path/to/custom/config/dir # Optional, for custom config file location
```
This allows you to maintain different configurations for various environments or deployment scenarios without modifying the code.
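The lookup is roughly equivalent to the sketch below (illustrative; the actual resolution lives inside DeepWiki's config-loading code):

```python
import json
import os
from pathlib import Path

# Fall back to api/config/ when DEEPWIKI_CONFIG_DIR is not set
CONFIG_DIR = Path(os.getenv("DEEPWIKI_CONFIG_DIR", "api/config"))

def load_config(name: str) -> dict:
    """Load generator.json, embedder.json, or repo.json from the config dir."""
    with open(CONFIG_DIR / name) as f:
        return json.load(f)

generator_cfg = load_config("generator.json")
print(list(generator_cfg))
```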
### Step 3: Start the API Server
```bash
# From the project root
python -m api.main
```
The API will be available at `http://localhost:8001`.
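Once the server is running, you can confirm it responds by hitting the root endpoint (documented under API Endpoints below; this assumes the default port and a JSON response):

```python
import requests

# The root endpoint returns basic API information and available endpoints
response = requests.get("http://localhost:8001/")
print(response.status_code)
print(response.json())
```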
## 🧠 How It Works
### 1. Repository Indexing
When you provide a GitHub repository URL, the API does the following (sketched conceptually after the list):
- Clones the repository locally (if not already cloned)
- Reads all files in the repository
- Creates embeddings for the files using OpenAI
- Stores the embeddings in a local database
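Conceptually, the indexing step looks like the sketch below. It is illustrative only: the repository path and embedding model are placeholders, and the real pipeline uses the embedder and text splitter configured in `embedder.json` rather than naive truncation:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()
index = []  # in-memory stand-in for the local embedding database

# Embed each file's contents so questions can be matched against them later
for path in Path("~/.adalflow/repos/my-repo").expanduser().rglob("*.py"):
    text = path.read_text(errors="ignore")
    embedding = client.embeddings.create(
        model="text-embedding-3-small",  # placeholder model name
        input=text[:8000],  # naive truncation; real chunking is configurable
    ).data[0].embedding
    index.append({"file": str(path), "embedding": embedding})
```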
### 2. Smart Retrieval (RAG)
When you ask a question (a minimal retrieval sketch follows this list):
- The API finds the most relevant code snippets
- These snippets are used as context for the AI
- The AI generates a response based on this context
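A minimal version of the retrieval step, reusing the in-memory `index` from the previous sketch. DeepWiki's actual retriever is configured in `embedder.json`; this simply ranks files by cosine similarity:

```python
import numpy as np

def top_k(question_embedding, index, k=5):
    """Rank indexed files by cosine similarity to the question embedding."""
    q = np.array(question_embedding)
    scored = []
    for entry in index:
        v = np.array(entry["embedding"])
        scored.append((float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))),
                       entry["file"]))
    # The highest-scoring snippets become the context passed to the LLM
    return sorted(scored, reverse=True)[:k]
```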
### 3. Real-Time Streaming
- Responses are streamed in real-time
- You see the answer as it's being generated
- This creates a more interactive experience
## 📡 API Endpoints
### GET /
Returns basic API information and available endpoints.
### POST /chat/completions/stream
Streams an AI-generated response about a GitHub repository.
**Request Body:**
```json
{
"repo_url": "https://github.com/username/repo",
"messages": [
{
"role": "user",
"content": "What does this repository do?"
}
],
"filePath": "optional/path/to/file.py" // Optional
}
```
**Response:**
A streaming response with the generated text.
## 📝 Example Code
```python
import requests
# API endpoint
url = "http://localhost:8001/chat/completions/stream"
# Request data
payload = {
"repo_url": "https://github.com/AsyncFuncAI/deepwiki-open",
"messages": [
{
"role": "user",
"content": "Explain how React components work"
}
]
}
# Make streaming request
response = requests.post(url, json=payload, stream=True)
# Process the streaming response
for chunk in response.iter_content(chunk_size=None):
if chunk:
print(chunk.decode('utf-8'), end='', flush=True)
```
## 💾 Storage
All data is stored locally on your machine:
- Cloned repositories: `~/.adalflow/repos/`
- Embeddings and indexes: `~/.adalflow/databases/`
- Generated wiki cache: `~/.adalflow/wikicache/`
No cloud storage is used - everything runs on your computer!