Update README.md
README.md CHANGED
@@ -3,28 +3,23 @@ license: cc-by-sa-4.0
inference: false
---
- # SLIM-
<!-- Provide a quick summary of what the model is/does. -->
- **slim-
-
- `{'sentiment': ['positive'], 'people': ['..'], 'organization': ['..'], 'place': ['..']}`
-
- This 'combo' model is designed to illustrate the potential power of using function calls on small, specialized models to enable a single model architecture to combine the capabilities of what were traditionally two separate model architectures on an encoder.
-
- The intent of SLIMs is to forge a middle-ground between traditional encoder-based classifiers and open-ended API-based LLMs, providing an intuitive, flexible natural language response, without complex prompting, and with improved generalization and ability to fine-tune to a specific domain use case.
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.
-
## Prompt format:
- `function = "
- `params = "
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
@@ -32,11 +27,11 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-sa-ner-3b-tool'*
<details>
<summary>Transformers Script </summary>
- model = AutoModelForCausalLM.from_pretrained("llmware/slim-
- tokenizer = AutoTokenizer.from_pretrained("llmware/slim-
- function = "
- params = "
text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."
@@ -75,8 +70,8 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-sa-ner-3b-tool'*
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog
- slim_model = ModelCatalog().load_model("llmware/slim-
- response = slim_model.function_call(text,params=["
print("llmware - llm_response: ", response)
@@ -3,28 +3,23 @@ license: cc-by-sa-4.0
inference: false
---
+ # SLIM-BOOLEAN
<!-- Provide a quick summary of what the model is/does. -->
+ **slim-boolean** is an experimental model designed to implement a boolean function call using a 2.7B parameter specialized model. As input, the model takes a context passage, a yes-no question, and an optional (explain) parameter; as output, it generates a Python dictionary with two keys - 'answer', which contains the 'yes/no' classification, and 'explanation', which provides a text snippet from the passage that was the basis for the classification, e.g.:
+ `{'answer': ['yes'], 'explanation': ['the results exceeded expectations by 3%']}`
This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.
+ For fast inference, we recommend using the 'quantized tool' version, e.g., [**'slim-boolean-tool'**](https://huggingface.co/llmware/slim-boolean-tool).
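If the tool version is available through llmware's ModelCatalog (the registered short name "slim-boolean-tool" is an assumption here), usage would mirror the function-call example further below; a minimal sketch:

```python
from llmware.models import ModelCatalog

# load the quantized 'tool' version - the short model name is an assumption
boolean_tool = ModelCatalog().load_model("slim-boolean-tool")

# same function_call interface as shown in the LLMWare example below
response = boolean_tool.function_call(
    "Revenue grew 12% year-over-year, ahead of guidance.",
    params=["did revenue increase? (explain)"],
    function="boolean",
)
print("llm_response: ", response)
```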
## Prompt format:
+ `function = "boolean"`
+ `params = "{insert yes-no-question} (explain)"`
`prompt = "<human> " + {text} + "\n" + `
`"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
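To make the format concrete, here is a minimal sketch of assembling the prompt in Python, using the same sample values as the scripts below:

```python
function = "boolean"
params = "did tesla stock price increase? (explain)"
text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."

# substitute the function name and params into the wrapper tags
prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"
```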
@@ -32,11 +27,11 @@
<details>
<summary>Transformers Script </summary>
+ model = AutoModelForCausalLM.from_pretrained("llmware/slim-boolean")
+ tokenizer = AutoTokenizer.from_pretrained("llmware/slim-boolean")
+ function = "boolean"
+ params = "did tesla stock price increase? (explain)"
text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."
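The diff elides the remainder of the script; continuing from the lines above, a minimal sketch of the usual remaining steps (prompt assembly as described under the prompt format, plus standard generate/decode calls - generation settings here are illustrative) might be:

```python
import ast

prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs["input_ids"], max_new_tokens=100)

# decode only the tokens generated after the prompt
output_text = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# the model emits a Python-dict-style string, so parse it defensively
try:
    llm_response = ast.literal_eval(output_text.strip())
except (ValueError, SyntaxError):
    llm_response = output_text  # fall back to raw text if the output is not well-formed
print(llm_response)
```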
@@ -75,8 +70,8 @@
<summary>Using as Function Call in LLMWare</summary>
from llmware.models import ModelCatalog
+ slim_model = ModelCatalog().load_model("llmware/slim-boolean")
+ response = slim_model.function_call(text, params=["did the stock price increase? (explain)"], function="boolean")
print("llmware - llm_response: ", response)
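For the sample passage above, a response of the form `{'answer': ['no'], 'explanation': ['Tesla stock declined 8% ...']}` would be expected, with the exact explanation snippet varying by generation settings.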