*Our chat window, showing a conversation with the Transformers library.* 🚀
# Getting started
## Installation
### Using pipx (for regular users)
Make sure pipx is installed on your system (see instructions), then run:
```
pipx install git+https://github.com/Storia-AI/sage.git@main
```
### Using venv and pip (for contributors)
Alternatively, you can manually create a virtual environment and install `sage` via pip:
```
python -m venv sage-venv
source sage-venv/bin/activate
git clone https://github.com/Storia-AI/sage.git
cd sage
pip install -e .
```
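With either install method, you can sanity-check that everything is wired up by asking the CLI for help (this assumes the `sage-index` and `sage-chat` entry points ended up on your `PATH`):
```
sage-index --help
sage-chat --help
```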
## Prerequisites
`sage` performs two steps:
1. Indexes your codebase (requiring an embedder and a vector store)
2. Enables chatting via LLM + RAG (requiring access to an LLM)
### :computer: Running locally (lower quality)
1. To index the codebase locally, we use the open-source project Marqo, which is both an embedder and a vector store. To bring up a Marqo instance:
```
docker rm -f marqo
docker pull marqoai/marqo:latest
docker run --name marqo -it -p 8882:8882 marqoai/marqo:latest
```
This will open a persistent Marqo console window; on a fresh install, it should take around 2-3 minutes to start up.
2. To chat with an LLM locally, we use Ollama:
- Head over to [ollama.com](https://ollama.com) to download the appropriate binary for your machine.
- Open a new terminal window
- Pull the desired model, e.g. `ollama pull llama3.1`.
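Once both services are set up, a quick sanity check might look like this (assuming the default Marqo port and the `llama3.1` model pulled above):
```
# The Marqo container should be up and listening on port 8882
docker ps --filter name=marqo

# Ollama should be able to answer a trivial prompt
ollama run llama3.1 "Say hello in one word."
```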
### :cloud: Using external providers (higher quality)
1. For embeddings, we support OpenAI and Voyage. According to [our experiments](benchmarks/retrieval/README.md), OpenAI is better quality. Their batch API is also faster, with more generous rate limits. Export the API key of the desired provider:
```
export OPENAI_API_KEY=... # or
export VOYAGE_API_KEY=...
```
2. We use Pinecone for the vector store, so you will need an API key:
```
export PINECONE_API_KEY=...
```
If you want to reuse an existing Pinecone index, specify it. Otherwise we'll create a new one called `sage`.
```
export PINECONE_INDEX_NAME=...
```
3. For reranking, we support NVIDIA, Voyage, Cohere, and Jina.
- According to [our experiments](benchmark/retrieval/README.md), NVIDIA performs best. To get an API key, follow [these instructions](https://docs.nvidia.com/nim/large-language-models/latest/getting-started.html#generate-an-api-key). Note that NVIDIA's API keys are model-specific. We recommend using `nvidia/nv-rerankqa-mistral-4b-v3`.
- Export the API key of the desired provider:
```
export NVIDIA_API_KEY=... # or
export VOYAGE_API_KEY=... # or
export COHERE_API_KEY=... # or
export JINA_API_KEY=...
```
4. For chatting with an LLM, we support OpenAI and Anthropic. For the latter, set an additional API key:
```
export ANTHROPIC_API_KEY=...
```
For easier configuration, adapt the entries within the sample `.sage-env` (change the API key names based on your desired setup) and run:
```
source .sage-env
```
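For reference, a `.sage-env` for an all-remote setup with OpenAI embeddings and chat, Pinecone, and NVIDIA reranking might look like the sketch below (the exact entries in the sample file may differ; keep only the keys your setup needs):
```
export OPENAI_API_KEY=...
export PINECONE_API_KEY=...
export PINECONE_INDEX_NAME=sage
export NVIDIA_API_KEY=...
# Optional, only if you also want Anthropic models or GitHub issues:
# export ANTHROPIC_API_KEY=...
# export GITHUB_TOKEN=...
```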
### Optional
If you are planning on indexing GitHub issues in addition to the codebase, you will need a GitHub token:
```
export GITHUB_TOKEN=...
```
## Running it
1. Select your desired repository:
```
export GITHUB_REPO=huggingface/transformers
```
2. Index the repository. This might take a few minutes, depending on its size.
```
sage-index $GITHUB_REPO
```
To use external providers instead of running locally, set `--mode=remote`.
3. Chat with the repository, once it's indexed:
```
sage-chat $GITHUB_REPO
```
To use external providers instead of running locally, set `--mode=remote`.
### Notes:
- To get a public URL for your chat app, set `--share=true`.
- You can overwrite the default settings (e.g. desired embedding model or LLM) via command line flags. Run `sage-index --help` or `sage-chat --help` for a full list.
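Putting it together, an end-to-end run against external providers (after exporting the API keys above) could look like this:
```
export GITHUB_REPO=huggingface/transformers
sage-index $GITHUB_REPO --mode=remote
sage-chat $GITHUB_REPO --mode=remote --share=true
```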
## Additional features
### :lock: Working with private repositories
To index and chat with a private repository, simply set the `GITHUB_TOKEN` environment variable. To obtain this token, go to github.com > click on your profile icon > Settings > Developer settings > Personal access tokens. You can create either a fine-grained token scoped to the desired repository or a classic token.
```
export GITHUB_TOKEN=...
```
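For example, end to end with a private repository (the repository name below is just a placeholder):
```
export GITHUB_TOKEN=...                        # personal access token with read access
export GITHUB_REPO=your-org/your-private-repo
sage-index $GITHUB_REPO
sage-chat $GITHUB_REPO
```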
### :hammer_and_wrench: Control which files get indexed
You can specify an inclusion or exclusion file in the following format:
```
# This is a comment
ext:.my-ext-1
ext:.my-ext-2
ext:.my-ext-3
dir:my-dir-1
dir:my-dir-2
dir:my-dir-3
file:my-file-1.md
file:my-file-2.py
file:my-file-3.cpp
```
where:
- `ext` specifies a file extension
- `dir` specifies a directory. This is not a full path. For instance, if you specify `dir:tests` in an exclusion file, then a file like `/path/to/my/tests/file.py` will be ignored.
- `file` specifies a file name. This is also not a full path. For instance, if you specify `file:__init__.py`, then a file like `/path/to/my/__init__.py` will be ignored.
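For example, a concrete exclusion file that skips test and docs directories, lock files, and package initializers might look like this:
```
# Skip tests, docs, lock files and __init__.py files
dir:tests
dir:docs
ext:.lock
file:__init__.py
```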
To specify an inclusion file (i.e. only index the specified files):
```
sage-index $GITHUB_REPO --include=/path/to/inclusion/file
```
To specify an exclusion file (i.e. index all files, except for the ones specified):
```
sage-index $GITHUB_REPO --exclude=/path/to/exclusion/file
```
By default, we use the exclusion file [sample-exclude.txt](sage/sample-exclude.txt).
### :bug: Index open GitHub issues
You will need a GitHub token first:
```
export GITHUB_TOKEN=...
```
To index GitHub issues without comments:
```
sage-index $GITHUB_REPO --index-issues
```
To index GitHub issues with comments:
```
sage-index $GITHUB_REPO --index-issues --index-issue-comments
```
To index GitHub issues, but not the codebase:
```
sage-index $GITHUB_REPO --index-issues --no-index-repo
```
### :books: Experiment with retrieval strategies
Retrieving the right files from the vector database is arguably the quality bottleneck of the system. We are actively experimenting with various retrieval strategies and documenting our findings [here](benchmark/retrieval/README.md).
Currently, we support the following types of retrieval (see the example invocations after this list):
- **Vanilla RAG** from a vector database (nearest neighbor between dense embeddings). This is the default.
- **Hybrid RAG** that combines dense retrieval (embeddings-based) with sparse retrieval (BM25). Use `--retrieval-alpha` to weigh the two strategies.
- A value of 1 means dense-only retrieval and 0 means BM25-only retrieval.
- Note this is not available when running locally, only when using Pinecone as a vector store.
  - Contrary to [Anthropic's findings](https://www.anthropic.com/news/contextual-retrieval), we find that BM25 actually hurts performance *on codebases*, because it gives an undeserved advantage to Markdown files.
- **Multi-query retrieval** performs multiple query rewrites, makes a separate retrieval call for each, and takes the union of the retrieved documents. You can activate it by passing `--multi-query-retrieval`. This can be combined with both vanilla and hybrid RAG.
- We find that [on our benchmark](benchmark/retrieval/README.md) this only marginally improves retrieval quality (from 0.44 to 0.46 R-precision) while being significantly slower and more expensive due to LLM calls. But your mileage may vary.
- **LLM-only retrieval** completely circumvents indexing the codebase. We simply enumerate all file paths and pass them to an LLM together with the user query. We ask the LLM which files are likely to be relevant for the user query, solely based on their filenames. You can activate it by passing `--llm-retriever`.
- We find that [on our benchmark](benchmark/retrieval/README.md) the performance is comparable with vector database solutions (R-precision is 0.44 for both). This is quite remarkable, since we've saved so much effort by not indexing the codebase. However, we are reluctant to claim that these findings generalize, for the following reasons:
- Our (artificial) dataset occasionally contains explicit path names in the query, making it trivial for the LLM. Sample query: *"Alice is managing a series of machine learning experiments. Please explain in detail how `main` in `examples/pytorch/image-pretraining/run_mim.py` allows her to organize the outputs of each experiment in separate directories."*
- Our benchmark focuses on the Transformers library, which is well-maintained and the file paths are often meaningful. This might not be the case for all codebases.
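As a rough illustration, the strategies above can be selected via flags along these lines (we assume here that `sage-chat` accepts the retrieval flags; run `sage-chat --help` to confirm where each flag belongs):
```
# Hybrid RAG, weighing dense and sparse retrieval equally (Pinecone-backed only)
sage-chat $GITHUB_REPO --mode=remote --retrieval-alpha=0.5

# Multi-query retrieval on top of the default vanilla RAG
sage-chat $GITHUB_REPO --multi-query-retrieval

# LLM-only retrieval, skipping the vector database entirely
sage-chat $GITHUB_REPO --llm-retriever
```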
# Why chat with a codebase?
Sometimes you just want to learn how a codebase works and how to integrate it, without spending hours sifting through
the code itself.
`sage` is like an open-source GitHub Copilot with the most up-to-date information about your repo.
Features:
- **Dead-simple set-up.** Run *two scripts* and you have a functional chat interface for your code. That's really it.
- **Heavily documented answers.** Every response shows where in the code the context for the answer was pulled from. Let's build trust in the AI.
- **Runs locally or on the cloud.**
- **Plug-and-play.** Want to improve the algorithms powering the code understanding/generation? We've made every component of the pipeline easily swappable. Google-grade engineering standards allow you to customize to your heart's content.
# Changelog
- 2024-09-16: Renamed `repo2vec` to `sage`.
- 2024-09-03: Support for indexing GitHub issues.
- 2024-08-30: Support for running everything locally (Marqo for embeddings, Ollama for LLMs).
# Want your repository hosted?
We're working to make all code on the internet searchable and understandable for devs. You can check out our early product, [Code Sage](https://sage.storia.ai). We pre-indexed a slew of OSS repos, and you can index your desired ones by simply pasting a GitHub URL.
If you're the maintainer of an OSS repo and would like a dedicated page on Code Sage (e.g. `sage.storia.ai/your-repo`), then send us a message at [founders@storia.ai](mailto:founders@storia.ai). We'll do it for free!

# Extensions & Contributions
We deliberately built the code in a modular way, so that you can plug in your desired embedding, LLM, and vector store providers by simply implementing the relevant abstract classes.
Feel free to send feature requests to [founders@storia.ai](mailto:founders@storia.ai) or make a pull request!