Merge branch 'main' into pdf-render

Files changed:
- README.md (+1 -1)
- streamlit_app.py (+1 -1)
README.md
CHANGED

```diff
@@ -42,7 +42,7 @@ Allow to change the number of blocks from the original document that are considered
 The default size of each block is 250 tokens (which can be changed before uploading the first document).
 With default settings, each question uses around 1000 tokens.

-**NOTE**: if the chat answers something like "the information is not provided in the given context", **changing the context size
+**NOTE**: if the chat answers something like "the information is not provided in the given context", **changing the context size will likely help**.

 ### Chunks size
 When uploaded, each document is split into blocks of a determined size (250 tokens by default).
```
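The chunking behavior the README describes (splitting each uploaded document into fixed-size blocks, 250 tokens by default) can be sketched as follows. This is an illustrative assumption, not the app's actual code: real tokenization is model-specific, and here tokens are approximated with whitespace-separated words; the function name `split_into_blocks` is hypothetical.

```python
def split_into_blocks(text: str, block_size: int = 250) -> list[str]:
    """Split a document into blocks of at most `block_size` tokens.

    Tokens are approximated as whitespace-separated words; the real app
    would use the embedding model's own tokenizer.
    """
    tokens = text.split()
    return [
        " ".join(tokens[i:i + block_size])
        for i in range(0, len(tokens), block_size)
    ]

# A 600-word document yields three blocks: 250 + 250 + 100 tokens.
doc = "word " * 600
blocks = split_into_blocks(doc, block_size=250)
print(len(blocks))  # → 3
```

With the README's default settings (around 1000 tokens per question), raising the number of retrieved blocks or the block size enlarges the context sent to the model, which is why the NOTE suggests changing the context size when the chat reports missing information.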
streamlit_app.py
CHANGED

```diff
@@ -163,7 +163,7 @@ with st.sidebar:
     st.session_state['model'] = model = st.radio(
         "Model",
         ("chatgpt-3.5-turbo", "mistral-7b-instruct-v0.1", "zephyr-7b-beta"),
-        index=
+        index=2,
         captions=[
             "ChatGPT 3.5 Turbo + Ada-002-text (embeddings)",
             "Mistral-7B-Instruct-V0.1 + Sentence BERT (embeddings) :free:",
```