bhavinmatariya committed on
Commit 395fb5a · verified · 1 Parent(s): 9fbffb9

Update README.md

Files changed (1)
  1. README.md +1 -559
README.md CHANGED
@@ -1,5 +1,3 @@
- # DeepWiki-Open
-
  ---
  title: DeepWiki Open
  emoji: 🧠
@@ -8,560 +6,4 @@ colorTo: purple
  sdk: docker
  app_port: 3000
  pinned: false
- ---
-
- ![DeepWiki Banner](screenshots/Deepwiki.png)
-
- **DeepWiki** is my own implementation attempt of DeepWiki. It automatically creates beautiful, interactive wikis for any GitHub, GitLab, or BitBucket repository! Just enter a repo name, and DeepWiki will:
-
- 1. Analyze the code structure
- 2. Generate comprehensive documentation
- 3. Create visual diagrams to explain how everything works
- 4. Organize it all into an easy-to-navigate wiki
-
- [!["Buy Me A Coffee"](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://buymeacoffee.com/sheing)
- [![Tip in Crypto](https://tip.md/badge.svg)](https://tip.md/sng-asyncfunc)
- [![Twitter/X](https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white)](https://x.com/sashimikun_void)
- [![Discord](https://img.shields.io/badge/Discord-7289DA?style=for-the-badge&logo=discord&logoColor=white)](https://discord.com/invite/VQMBGR8u5v)
-
- [English](./README.md) | [简体中文](./README.zh.md) | [繁體中文](./README.zh-tw.md) | [日本語](./README.ja.md) | [Español](./README.es.md) | [한국어](./README.kr.md) | [Tiếng Việt](./README.vi.md) | [Português Brasileiro](./README.pt-br.md) | [Français](./README.fr.md) | [Русский](./README.ru.md)
-
- ## ✨ Features
-
- - **Instant Documentation**: Turn any GitHub, GitLab or BitBucket repo into a wiki in seconds
- - **Private Repository Support**: Securely access private repositories with personal access tokens
- - **Smart Analysis**: AI-powered understanding of code structure and relationships
- - **Beautiful Diagrams**: Automatic Mermaid diagrams to visualize architecture and data flow
- - **Easy Navigation**: Simple, intuitive interface to explore the wiki
- - **Ask Feature**: Chat with your repository using RAG-powered AI to get accurate answers
- - **DeepResearch**: Multi-turn research process that thoroughly investigates complex topics
- - **Multiple Model Providers**: Support for Google Gemini, OpenAI, OpenRouter, and local Ollama models
-
- ## 🚀 Quick Start (Super Easy!)
-
- ### Option 1: Using Docker
-
- ```bash
- # Clone the repository
- git clone https://github.com/AsyncFuncAI/deepwiki-open.git
- cd deepwiki-open
-
- # Create a .env file with your API keys
- echo "GOOGLE_API_KEY=your_google_api_key" > .env
- echo "OPENAI_API_KEY=your_openai_api_key" >> .env
- # Optional: Add OpenRouter API key if you want to use OpenRouter models
- echo "OPENROUTER_API_KEY=your_openrouter_api_key" >> .env
- # Optional: Add Ollama host if not local. Defaults to http://localhost:11434
- echo "OLLAMA_HOST=your_ollama_host" >> .env
- # Optional: Add Azure API key, endpoint and version if you want to use Azure OpenAI models
- echo "AZURE_OPENAI_API_KEY=your_azure_openai_api_key" >> .env
- echo "AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint" >> .env
- echo "AZURE_OPENAI_VERSION=your_azure_openai_version" >> .env
- # Run with Docker Compose
- docker-compose up
- ```
-
- For detailed instructions on using DeepWiki with Ollama and Docker, see [Ollama Instructions](Ollama-instruction.md).
-
- > 💡 **Where to get these keys:**
- > - Get a Google API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
- > - Get an OpenAI API key from [OpenAI Platform](https://platform.openai.com/api-keys)
- > - Get Azure OpenAI credentials from [Azure Portal](https://portal.azure.com/) - create an Azure OpenAI resource and get the API key, endpoint, and API version
-
- ### Option 2: Manual Setup (Recommended)
-
- #### Step 1: Set Up Your API Keys
-
- Create a `.env` file in the project root with these keys:
-
- ```
- GOOGLE_API_KEY=your_google_api_key
- OPENAI_API_KEY=your_openai_api_key
- # Optional: Add this if you want to use OpenRouter models
- OPENROUTER_API_KEY=your_openrouter_api_key
- # Optional: Add this if you want to use Azure OpenAI models
- AZURE_OPENAI_API_KEY=your_azure_openai_api_key
- AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint
- AZURE_OPENAI_VERSION=your_azure_openai_version
- # Optional: Add Ollama host if not local. Default: http://localhost:11434
- OLLAMA_HOST=your_ollama_host
- ```
-
- #### Step 2: Start the Backend
-
- ```bash
- # Install Python dependencies
- pip install -r api/requirements.txt
-
- # Start the API server
- python -m api.main
- ```
-
- #### Step 3: Start the Frontend
-
- ```bash
- # Install JavaScript dependencies
- npm install
- # or
- yarn install
-
- # Start the web app
- npm run dev
- # or
- yarn dev
- ```
-
- #### Step 4: Use DeepWiki!
-
- 1. Open [http://localhost:3000](http://localhost:3000) in your browser
- 2. Enter a GitHub, GitLab, or Bitbucket repository (like `https://github.com/openai/codex`, `https://github.com/microsoft/autogen`, `https://gitlab.com/gitlab-org/gitlab`, or `https://bitbucket.org/redradish/atlassian_app_versions`)
- 3. For private repositories, click "+ Add access tokens" and enter your GitHub or GitLab personal access token
- 4. Click "Generate Wiki" and watch the magic happen!
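Under the hood, the repo input has to be split into host, owner, and repository name before cloning. A minimal sketch of such a parser (the function name and rules are illustrative, not DeepWiki's actual code; e.g. GitLab subgroup paths would need extra handling):

```python
from urllib.parse import urlparse

def parse_repo_url(url: str) -> tuple[str, str, str]:
    """Split a GitHub/GitLab/Bitbucket URL into (host, owner, repo).

    Hypothetical helper for illustration only.
    """
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if host not in ("github.com", "gitlab.com", "bitbucket.org"):
        raise ValueError(f"Unsupported host: {host}")
    # Take the first two path segments as owner and repo name.
    parts = [p for p in parsed.path.strip("/").split("/") if p]
    if len(parts) < 2:
        raise ValueError("Expected a URL like https://github.com/owner/repo")
    owner, repo = parts[0], parts[1].removesuffix(".git")
    return host, owner, repo
```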
-
- ## 🔍 How It Works
-
- DeepWiki uses AI to:
-
- 1. Clone and analyze the GitHub, GitLab, or Bitbucket repository (including private repos with token authentication)
- 2. Create embeddings of the code for smart retrieval
- 3. Generate documentation with context-aware AI (using Google Gemini, OpenAI, OpenRouter, Azure OpenAI, or local Ollama models)
- 4. Create visual diagrams to explain code relationships
- 5. Organize everything into a structured wiki
- 6. Enable intelligent Q&A with the repository through the Ask feature
- 7. Provide in-depth research capabilities with DeepResearch
-
- ```mermaid
- graph TD
- A[User inputs GitHub/GitLab/Bitbucket repo] --> AA{Private repo?}
- AA -->|Yes| AB[Add access token]
- AA -->|No| B[Clone Repository]
- AB --> B
- B --> C[Analyze Code Structure]
- C --> D[Create Code Embeddings]
-
- D --> M{Select Model Provider}
- M -->|Google Gemini| E1[Generate with Gemini]
- M -->|OpenAI| E2[Generate with OpenAI]
- M -->|OpenRouter| E3[Generate with OpenRouter]
- M -->|Local Ollama| E4[Generate with Ollama]
- M -->|Azure| E5[Generate with Azure]
-
- E1 --> E[Generate Documentation]
- E2 --> E
- E3 --> E
- E4 --> E
- E5 --> E
-
- D --> F[Create Visual Diagrams]
- E --> G[Organize as Wiki]
- F --> G
- G --> H[Interactive DeepWiki]
-
- classDef process stroke-width:2px;
- classDef data stroke-width:2px;
- classDef result stroke-width:2px;
- classDef decision stroke-width:2px;
-
- class A,D data;
- class AA,M decision;
- class B,C,E,F,G,AB,E1,E2,E3,E4,E5 process;
- class H result;
- ```
-
- ## 🛠️ Project Structure
-
- ```
- deepwiki/
- ├── api/                  # Backend API server
- │   ├── main.py           # API entry point
- │   ├── api.py            # FastAPI implementation
- │   ├── rag.py            # Retrieval Augmented Generation
- │   ├── data_pipeline.py  # Data processing utilities
- │   └── requirements.txt  # Python dependencies
-
- ├── src/                  # Frontend Next.js app
- │   ├── app/              # Next.js app directory
- │   │   └── page.tsx      # Main application page
- │   └── components/       # React components
- │       └── Mermaid.tsx   # Mermaid diagram renderer
-
- ├── public/               # Static assets
- ├── package.json          # JavaScript dependencies
- └── .env                  # Environment variables (create this)
- ```
-
- ## 🤖 Provider-Based Model Selection System
-
- DeepWiki now implements a flexible provider-based model selection system supporting multiple LLM providers:
-
- ### Supported Providers and Models
-
- - **Google**: Default `gemini-2.5-flash`, also supports `gemini-2.5-flash-lite`, `gemini-2.5-pro`, etc.
- - **OpenAI**: Default `gpt-5-nano`, also supports `gpt-5`, `4o`, etc.
- - **OpenRouter**: Access to multiple models via a unified API, including Claude, Llama, Mistral, etc.
- - **Azure OpenAI**: Default `gpt-4o`, also supports `o4-mini`, etc.
- - **Ollama**: Support for locally running open-source models like `llama3`
-
- ### Environment Variables
-
- Each provider requires its corresponding API key environment variables:
-
- ```
- # API Keys
- GOOGLE_API_KEY=your_google_api_key # Required for Google Gemini models
- OPENAI_API_KEY=your_openai_api_key # Required for OpenAI models
- OPENROUTER_API_KEY=your_openrouter_api_key # Required for OpenRouter models
- AZURE_OPENAI_API_KEY=your_azure_openai_api_key # Required for Azure OpenAI models
- AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint # Required for Azure OpenAI models
- AZURE_OPENAI_VERSION=your_azure_openai_version # Required for Azure OpenAI models
-
- # OpenAI API Base URL Configuration
- OPENAI_BASE_URL=https://custom-api-endpoint.com/v1 # Optional, for custom OpenAI API endpoints
-
- # Ollama host
- OLLAMA_HOST=your_ollama_host # Optional, if Ollama is not local. Default: http://localhost:11434
-
- # Configuration Directory
- DEEPWIKI_CONFIG_DIR=/path/to/custom/config/dir # Optional, for custom config file location
- ```
-
- ### Configuration Files
-
- DeepWiki uses JSON configuration files to manage various aspects of the system:
-
- 1. **`generator.json`**: Configuration for text generation models
-    - Defines available model providers (Google, OpenAI, OpenRouter, Azure, Ollama)
-    - Specifies default and available models for each provider
-    - Contains model-specific parameters like temperature and top_p
-
- 2. **`embedder.json`**: Configuration for embedding models and text processing
-    - Defines embedding models for vector storage
-    - Contains retriever configuration for RAG
-    - Specifies text splitter settings for document chunking
-
- 3. **`repo.json`**: Configuration for repository handling
-    - Contains file filters to exclude certain files and directories
-    - Defines repository size limits and processing rules
-
- By default, these files are located in the `api/config/` directory. You can customize their location using the `DEEPWIKI_CONFIG_DIR` environment variable.
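For illustration, a hypothetical `generator.json` fragment showing the provider/default-model/parameter layout described above (the field names are invented for this sketch; check the actual file under `api/config/` for the real schema):

```json
{
  "providers": {
    "google": {
      "default_model": "gemini-2.5-flash",
      "models": {
        "gemini-2.5-flash": { "temperature": 0.7, "top_p": 0.8 },
        "gemini-2.5-pro": { "temperature": 0.7, "top_p": 0.8 }
      }
    },
    "ollama": {
      "default_model": "llama3",
      "models": { "llama3": { "temperature": 0.7 } }
    }
  }
}
```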
-
- ### Custom Model Selection for Service Providers
-
- The custom model selection feature is designed for service providers who need to:
-
- - Offer multiple AI model choices to users within their organization
- - Adapt quickly to the rapidly evolving LLM landscape without code changes
- - Support specialized or fine-tuned models that aren't in the predefined list
-
- Service providers can implement their model offerings by selecting from the predefined options or entering custom model identifiers in the frontend interface.
-
- ### Base URL Configuration for Enterprise Private Channels
-
- The OpenAI client's `base_url` configuration is designed primarily for enterprise users with private API channels. This feature:
-
- - Enables connection to private or enterprise-specific API endpoints
- - Allows organizations to use their own self-hosted or custom-deployed LLM services
- - Supports integration with third-party OpenAI API-compatible services
-
- **Coming Soon**: In future updates, DeepWiki will support a mode where users provide their own API keys in requests. This will allow enterprise customers with private channels to use their existing API arrangements without sharing credentials with the DeepWiki deployment.
-
- ## 🧩 Using OpenAI-Compatible Embedding Models (e.g., Alibaba Qwen)
-
- If you want to use embedding models compatible with the OpenAI API (such as Alibaba Qwen), follow these steps:
-
- 1. Replace the contents of `api/config/embedder.json` with those from `api/config/embedder_openai_compatible.json`.
- 2. In your project root `.env` file, set the relevant environment variables, for example:
-    ```
-    OPENAI_API_KEY=your_api_key
-    OPENAI_BASE_URL=your_openai_compatible_endpoint
-    ```
- 3. The program will automatically substitute placeholders in `embedder.json` with the values from your environment variables.
-
- This allows you to seamlessly switch to any OpenAI-compatible embedding service without code changes.
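The substitution step can be pictured as a simple environment expansion over the JSON text. A minimal sketch, assuming `${VAR}`-style placeholders (the project's actual placeholder syntax may differ):

```python
import json
import os
import re

def expand_env_placeholders(text: str) -> str:
    # Replace every ${VAR} with the value of VAR from the environment
    # (empty string if the variable is unset).
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), text)

# Demo: a config fragment with placeholders, expanded from the environment.
raw = '{"api_key": "${OPENAI_API_KEY}", "base_url": "${OPENAI_BASE_URL}"}'
os.environ["OPENAI_API_KEY"] = "sk-demo"
os.environ["OPENAI_BASE_URL"] = "https://example.com/v1"
config = json.loads(expand_env_placeholders(raw))
```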
281
-
282
- ### Logging
283
-
284
- DeepWiki uses Python's built-in `logging` module for diagnostic output. You can configure the verbosity and log file destination via environment variables:
285
-
286
- | Variable | Description | Default |
287
- |-----------------|--------------------------------------------------------------------|------------------------------|
288
- | `LOG_LEVEL` | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL). | INFO |
289
- | `LOG_FILE_PATH` | Path to the log file. If set, logs will be written to this file. | `api/logs/application.log` |
290
-
- To enable debug logging and direct logs to a custom file:
-
- ```bash
- export LOG_LEVEL=DEBUG
- export LOG_FILE_PATH=./debug.log
- python -m api.main
- ```
-
- Or with Docker Compose:
-
- ```bash
- LOG_LEVEL=DEBUG LOG_FILE_PATH=./debug.log docker-compose up
- ```
-
- When running with Docker Compose, the container's `api/logs` directory is bind-mounted to `./api/logs` on your host (see the `volumes` section in `docker-compose.yml`), ensuring log files persist across restarts.
-
- Alternatively, you can store these settings in your `.env` file:
-
- ```bash
- LOG_LEVEL=DEBUG
- LOG_FILE_PATH=./debug.log
- ```
-
- Then simply run:
-
- ```bash
- docker-compose up
- ```
-
- **Logging Path Security Considerations:** In production environments, ensure the `api/logs` directory and any custom log file path are secured with appropriate filesystem permissions and access controls. The application enforces that `LOG_FILE_PATH` resides within the project's `api/logs` directory to prevent path traversal or unauthorized writes.
-
- ## 🛠️ Advanced Setup
-
- ### Environment Variables
-
- | Variable | Description | Required | Note |
- |----------|-------------|----------|------|
- | `GOOGLE_API_KEY` | Google Gemini API key for AI generation | No | Required only if you want to use Google Gemini models |
- | `OPENAI_API_KEY` | OpenAI API key for embeddings | Yes | Required even if you're not using OpenAI models, as it's used for embeddings |
- | `OPENROUTER_API_KEY` | OpenRouter API key for alternative models | No | Required only if you want to use OpenRouter models |
- | `AZURE_OPENAI_API_KEY` | Azure OpenAI API key | No | Required only if you want to use Azure OpenAI models |
- | `AZURE_OPENAI_ENDPOINT` | Azure OpenAI endpoint | No | Required only if you want to use Azure OpenAI models |
- | `AZURE_OPENAI_VERSION` | Azure OpenAI version | No | Required only if you want to use Azure OpenAI models |
- | `OLLAMA_HOST` | Ollama host (default: http://localhost:11434) | No | Required only if you want to use an external Ollama server |
- | `PORT` | Port for the API server (default: 8001) | No | If you host the API and frontend on the same machine, make sure to change the port in `SERVER_BASE_URL` accordingly |
- | `SERVER_BASE_URL` | Base URL for the API server (default: http://localhost:8001) | No | |
- | `DEEPWIKI_AUTH_MODE` | Set to `true` or `1` to enable authorization mode | No | Defaults to `false`. If enabled, `DEEPWIKI_AUTH_CODE` is required |
- | `DEEPWIKI_AUTH_CODE` | The secret code required for wiki generation when `DEEPWIKI_AUTH_MODE` is enabled | No | Only used if `DEEPWIKI_AUTH_MODE` is `true` or `1` |
-
- If you're not using Ollama mode, you need to configure an OpenAI API key for embeddings. Other API keys are only required when configuring and using models from the corresponding providers.
-
- ## Authorization Mode
-
- DeepWiki can be configured to run in an authorization mode, where wiki generation requires a valid authorization code. This is useful if you want to control who can use the generation feature. It restricts frontend initiation and protects cache deletion, but doesn't fully prevent backend generation if the API endpoints are hit directly.
-
- To enable authorization mode, set the following environment variables:
-
- - `DEEPWIKI_AUTH_MODE`: Set this to `true` or `1`. When enabled, the frontend will display an input field for the authorization code.
- - `DEEPWIKI_AUTH_CODE`: Set this to the desired secret code.
-
- If `DEEPWIKI_AUTH_MODE` is not set or is set to `false` (or any value other than `true`/`1`), the authorization feature will be disabled, and no code will be required.
-
- ### Docker Setup
-
- You can use Docker to run DeepWiki:
-
- #### Running the Container
-
- ```bash
- # Pull the image from GitHub Container Registry
- docker pull ghcr.io/asyncfuncai/deepwiki-open:latest
-
- # Run the container with environment variables
- docker run -p 8001:8001 -p 3000:3000 \
-   -e GOOGLE_API_KEY=your_google_api_key \
-   -e OPENAI_API_KEY=your_openai_api_key \
-   -e OPENROUTER_API_KEY=your_openrouter_api_key \
-   -e OLLAMA_HOST=your_ollama_host \
-   -e AZURE_OPENAI_API_KEY=your_azure_openai_api_key \
-   -e AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint \
-   -e AZURE_OPENAI_VERSION=your_azure_openai_version \
-   -v ~/.adalflow:/root/.adalflow \
-   ghcr.io/asyncfuncai/deepwiki-open:latest
- ```
-
- This command also mounts `~/.adalflow` on your host to `/root/.adalflow` in the container. This path is used to store:
- - Cloned repositories (`~/.adalflow/repos/`)
- - Their embeddings and indexes (`~/.adalflow/databases/`)
- - Cached generated wiki content (`~/.adalflow/wikicache/`)
-
- This ensures that your data persists even if the container is stopped or removed.
-
- Or use the provided `docker-compose.yml` file:
-
- ```bash
- # Edit the .env file with your API keys first
- docker-compose up
- ```
-
- (The `docker-compose.yml` file is pre-configured to mount `~/.adalflow` for data persistence, similar to the `docker run` command above.)
-
- #### Using a .env file with Docker
-
- You can also mount a .env file to the container:
-
- ```bash
- # Create a .env file with your API keys
- echo "GOOGLE_API_KEY=your_google_api_key" > .env
- echo "OPENAI_API_KEY=your_openai_api_key" >> .env
- echo "OPENROUTER_API_KEY=your_openrouter_api_key" >> .env
- echo "AZURE_OPENAI_API_KEY=your_azure_openai_api_key" >> .env
- echo "AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint" >> .env
- echo "AZURE_OPENAI_VERSION=your_azure_openai_version" >> .env
- echo "OLLAMA_HOST=your_ollama_host" >> .env
-
- # Run the container with the .env file mounted
- docker run -p 8001:8001 -p 3000:3000 \
-   -v $(pwd)/.env:/app/.env \
-   -v ~/.adalflow:/root/.adalflow \
-   ghcr.io/asyncfuncai/deepwiki-open:latest
- ```
-
- #### Building the Docker image locally
-
- If you want to build the Docker image locally:
-
- ```bash
- # Clone the repository
- git clone https://github.com/AsyncFuncAI/deepwiki-open.git
- cd deepwiki-open
-
- # Build the Docker image
- docker build -t deepwiki-open .
-
- # Run the container
- docker run -p 8001:8001 -p 3000:3000 \
-   -e GOOGLE_API_KEY=your_google_api_key \
-   -e OPENAI_API_KEY=your_openai_api_key \
-   -e OPENROUTER_API_KEY=your_openrouter_api_key \
-   -e AZURE_OPENAI_API_KEY=your_azure_openai_api_key \
-   -e AZURE_OPENAI_ENDPOINT=your_azure_openai_endpoint \
-   -e AZURE_OPENAI_VERSION=your_azure_openai_version \
-   -e OLLAMA_HOST=your_ollama_host \
-   deepwiki-open
- ```
-
- #### Using Self-Signed Certificates in Docker
-
- If you're in an environment that uses self-signed certificates, you can include them in the Docker build:
-
- 1. Create a directory for your certificates (default is `certs` in your project root)
- 2. Copy your `.crt` or `.pem` certificate files into this directory
- 3. Build the Docker image:
-
- ```bash
- # Build with default certificates directory (certs)
- docker build .
-
- # Or build with a custom certificates directory
- docker build --build-arg CUSTOM_CERT_DIR=my-custom-certs .
- ```
-
- ### API Server Details
-
- The API server provides:
- - Repository cloning and indexing
- - RAG (Retrieval Augmented Generation)
- - Streaming chat completions
-
- For more details, see the [API README](./api/README.md).
-
- ## 🔌 OpenRouter Integration
-
- DeepWiki now supports [OpenRouter](https://openrouter.ai/) as a model provider, giving you access to hundreds of AI models through a single API:
-
- - **Multiple Model Options**: Access models from OpenAI, Anthropic, Google, Meta, Mistral, and more
- - **Simple Configuration**: Just add your OpenRouter API key and select the model you want to use
- - **Cost Efficiency**: Choose models that fit your budget and performance needs
- - **Easy Switching**: Toggle between different models without changing your code
-
- ### How to Use OpenRouter with DeepWiki
-
- 1. **Get an API Key**: Sign up at [OpenRouter](https://openrouter.ai/) and get your API key
- 2. **Add to Environment**: Add `OPENROUTER_API_KEY=your_key` to your `.env` file
- 3. **Enable in UI**: Check the "Use OpenRouter API" option on the homepage
- 4. **Select Model**: Choose from popular models like GPT-4o, Claude 3.5 Sonnet, Gemini 2.0, and more
-
- OpenRouter is particularly useful if you want to:
- - Try different models without signing up for multiple services
- - Access models that might be restricted in your region
- - Compare performance across different model providers
- - Optimize for cost vs. performance based on your needs
-
- ## 🤖 Ask & DeepResearch Features
-
- ### Ask Feature
-
- The Ask feature allows you to chat with your repository using Retrieval Augmented Generation (RAG):
-
- - **Context-Aware Responses**: Get accurate answers based on the actual code in your repository
- - **RAG-Powered**: The system retrieves relevant code snippets to provide grounded responses
- - **Real-Time Streaming**: See responses as they're generated for a more interactive experience
- - **Conversation History**: The system maintains context between questions for more coherent interactions
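Conceptually, the retrieval step ranks stored code chunks by embedding similarity and feeds the top matches to the model. A toy sketch with hand-made 3-dimensional vectors (real embeddings come from the configured embedder, and the helper names here are invented for illustration):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, chunks, k=2):
    # chunks: list of (snippet, embedding); return the k most similar snippets.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [snippet for snippet, _ in ranked[:k]]
```

The selected snippets are then inserted into the prompt, which is what grounds the answer in the repository's actual code.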
-
- ### DeepResearch Feature
-
- DeepResearch takes repository analysis to the next level with a multi-turn research process:
-
- - **In-Depth Investigation**: Thoroughly explores complex topics through multiple research iterations
- - **Structured Process**: Follows a clear research plan with updates and a comprehensive conclusion
- - **Automatic Continuation**: The AI automatically continues research until reaching a conclusion (up to 5 iterations)
- - **Research Stages**:
-   1. **Research Plan**: Outlines the approach and initial findings
-   2. **Research Updates**: Builds on previous iterations with new insights
-   3. **Final Conclusion**: Provides a comprehensive answer based on all iterations
-
- To use DeepResearch, simply toggle the "Deep Research" switch in the Ask interface before submitting your question.
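The automatic continuation can be pictured as a loop that re-queries the model until it emits a conclusion or hits the iteration cap. A sketch with a stub model (the prompt wording and the actual stop condition live in the backend; the "## Final Conclusion" marker here is an assumption for illustration):

```python
def deep_research(question: str, ask_model, max_iterations: int = 5) -> list[str]:
    # ask_model(question, history) -> response string. A response containing
    # "## Final Conclusion" ends the loop early, mirroring the stages above.
    history: list[str] = []
    for _ in range(max_iterations):
        response = ask_model(question, history)
        history.append(response)
        if "## Final Conclusion" in response:
            break
    return history
```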
-
- ## 📱 Screenshots
-
- ![DeepWiki Main Interface](screenshots/Interface.png)
- *The main interface of DeepWiki*
-
- ![Private Repository Support](screenshots/privaterepo.png)
- *Access private repositories with personal access tokens*
-
- ![DeepResearch Feature](screenshots/DeepResearch.png)
- *DeepResearch conducts multi-turn investigations for complex topics*
-
- ### Demo Video
-
- [![DeepWiki Demo Video](https://img.youtube.com/vi/zGANs8US8B4/0.jpg)](https://youtu.be/zGANs8US8B4)
-
- *Watch DeepWiki in action!*
-
- ## ❓ Troubleshooting
-
- ### API Key Issues
- - **"Missing environment variables"**: Make sure your `.env` file is in the project root and contains the required API keys
- - **"API key not valid"**: Check that you've copied the full key correctly with no extra spaces
- - **"OpenRouter API error"**: Verify your OpenRouter API key is valid and has sufficient credits
- - **"Azure OpenAI API error"**: Verify your Azure OpenAI credentials (API key, endpoint, and version) are correct and the service is properly deployed
-
- ### Connection Problems
- - **"Cannot connect to API server"**: Make sure the API server is running on port 8001
- - **"CORS error"**: The API is configured to allow all origins, but if you're having issues, try running both frontend and backend on the same machine
-
- ### Generation Issues
- - **"Error generating wiki"**: For very large repositories, try a smaller one first
- - **"Invalid repository format"**: Make sure you're using a valid GitHub, GitLab or Bitbucket URL format
- - **"Could not fetch repository structure"**: For private repositories, ensure you've entered a valid personal access token with appropriate permissions
- - **"Diagram rendering error"**: The app will automatically try to fix broken diagrams
-
- ### Common Solutions
- 1. **Restart both servers**: Sometimes a simple restart fixes most issues
- 2. **Check console logs**: Open browser developer tools to see any JavaScript errors
- 3. **Check API logs**: Look at the terminal where the API is running for Python errors
-
- ## 🤝 Contributing
-
- Contributions are welcome! Feel free to:
- - Open issues for bugs or feature requests
- - Submit pull requests to improve the code
- - Share your feedback and ideas
-
- ## 📄 License
-
- This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
-
- ## ⭐ Star History
-
- [![Star History Chart](https://api.star-history.com/svg?repos=AsyncFuncAI/deepwiki-open&type=Date)](https://star-history.com/#AsyncFuncAI/deepwiki-open&Date)
 
 
 
  ---
  title: DeepWiki Open
  emoji: 🧠

  sdk: docker
  app_port: 3000
  pinned: false
+ ---