Minor README clarifications (#29)
* Default readme remote options to closed
* Updated shell commands so they can be run immediately on copy
* Rename index/chat commands to avoid clash with local utilities (#31)
* Update README.md
---------
Co-authored-by: Julia Turc <turc.raluca@gmail.com>
README.md CHANGED

````diff
@@ -31,9 +31,12 @@ To install the library, simply run `pip install repo2vec`!
 docker run --name marqo -it -p 8882:8882 marqoai/marqo:latest
 ```
 
+This will open a persistent Marqo console window. This should take around 2-3 minutes on a fresh install.
+
 2. To chat with an LLM locally, we use <a href="https://github.com/ollama/ollama">Ollama</a>:
 
 - Head over to [ollama.com](https://ollama.com) to download the appropriate binary for your machine.
+- Open a new terminal window
 - Pull the desired model, e.g. `ollama pull llama3.1`.
 
 </details>
@@ -66,16 +69,18 @@ If you are planning on indexing GitHub issues in addition to the codebase, you w
 
 <details open>
 <summary><strong>:computer: Running locally</strong></summary>
-<p>To index the codebase:</p>
+<p>To index the codebase, run this command. This should take a few minutes, depending on the repo size.</p>
 
-index Storia-AI/repo2vec \
+# this can be any GitHub repository in the format ORG_NAME/REPO_NAME
+r2v-index Storia-AI/repo2vec \
 --embedder-type=marqo \
 --vector-store-type=marqo \
 --index-name=your-index-name
 
-<p> To chat with your codebase:</p>
+<p> To chat with your codebase, run this command:</p>
 
-chat Storia-AI/repo2vec \
+# this can be any GitHub repository in the format ORG_NAME/REPO_NAME
+r2v-chat Storia-AI/repo2vec \
 --vector-store-type=marqo \
 --index-name=your-index-name \
 --llm-provider=ollama \
@@ -84,16 +89,18 @@ If you are planning on indexing GitHub issues in addition to the codebase, you w
 
 <details>
 <summary><strong>:cloud: Using external providers</strong></summary>
-<p>To index the codebase:</p>
+<p>To index the codebase, run this command. This should take a few minutes, depending on the repo size.</p>
 
-index Storia-AI/repo2vec \
+# this can be any GitHub repository in the format ORG_NAME/REPO_NAME
+r2v-index Storia-AI/repo2vec \
 --embedder-type=openai \
 --vector-store-type=pinecone \
 --index-name=your-index-name
 
-<p> To chat with your codebase:</p>
+<p> To chat with your codebase, run this command:</p>
 
-chat Storia-AI/repo2vec \
+# this can be any GitHub repository in the format ORG_NAME/REPO_NAME
+r2v-chat Storia-AI/repo2vec \
 --vector-store-type=pinecone \
 --index-name=your-index-name \
 --llm-provider=openai \
````
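Pieced together, the local-only setup touched by this diff amounts to the sequence below. This is a minimal sketch, not the README's exact text: the repository, index name, and model are placeholders, and the diff truncates the `r2v-chat` flags after `--llm-provider` (trailing `\`), so any remaining flags are omitted here.

```shell
#!/bin/sh
# Placeholder values — substitute your own.
REPO="Storia-AI/repo2vec"      # any GitHub repo in ORG_NAME/REPO_NAME format
INDEX_NAME="your-index-name"   # any name for your Marqo index

# 1. Start the Marqo vector store (blocks the terminal; use a separate window):
#      docker run --name marqo -it -p 8882:8882 marqoai/marqo:latest
# 2. Pull a local model for Ollama:
#      ollama pull llama3.1

# 3. Index the repository, then 4. chat with it (commands built as strings
#    here so the sequence can be inspected before running):
index_cmd="r2v-index $REPO --embedder-type=marqo --vector-store-type=marqo --index-name=$INDEX_NAME"
chat_cmd="r2v-chat $REPO --vector-store-type=marqo --index-name=$INDEX_NAME --llm-provider=ollama"

echo "$index_cmd"
echo "$chat_cmd"
```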
|