<!-- Provide a quick summary of what the model is/does. -->
**slim-boolean** is an experimental model designed to implement a boolean question-answering function call using a 2.7B parameter specialized model. As input, the model takes a context passage, a yes-no question, and an optional (explain) parameter; as output, it generates a Python dictionary with two keys - 'answer', which contains the yes/no classification, and 'explanation', which provides a text snippet from the passage that was the basis for the classification, e.g.:
`{'answer': ['yes'], 'explanation': ['the results exceeded expectations by 3%']}`
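Because the model emits its result as a Python-dictionary-style string, it can be parsed back into a native dict for downstream use. A minimal sketch (the raw string below is the example output from above, not a live generation, and `ast.literal_eval` is one safe way to parse it - not an API from this model card):

```python
import ast

# Example raw output from slim-boolean: a dict-literal string with an
# 'answer' key (yes/no classification) and an 'explanation' key
# (supporting snippet drawn from the context passage).
raw_output = "{'answer': ['yes'], 'explanation': ['the results exceeded expectations by 3%']}"

# ast.literal_eval safely evaluates the dict-literal string into a Python
# dict without executing arbitrary code (unlike eval).
response = ast.literal_eval(raw_output)

answer = response["answer"][0]            # 'yes' or 'no'
explanation = response["explanation"][0]  # supporting text snippet

print(answer)       # -> yes
print(explanation)  # -> the results exceeded expectations by 3%
```

Both values arrive wrapped in lists, so indexing with `[0]` recovers the scalar classification and its supporting snippet.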
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which, in turn, is a fine-tune of stabilityai/stablelm-3b-4e1t.
For fast inference, we would recommend using the 'quantized tool' version, e.g., [**'slim-boolean-tool'**](https://huggingface.co/llmware/slim-boolean-tool).
## Prompt format: