Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ license: apache-2.0
 
 slim-sentiment has been fine-tuned for **sentiment analysis** function calls, generating output consisting of a JSON dictionary with the specified keys.
 
-Each slim model has a corresponding 'tool' in a separate repository, e.g., [**'slim-sentiment-tool'**](
+Each slim model has a corresponding 'tool' in a separate repository, e.g., [**'slim-sentiment-tool'**](https://huggingface.co/llmware/slim-sentiment-tool), which is a 4-bit quantized gguf version of the model intended for inference.
 
 Inference speed and loading time are much faster with the 'tool' versions of the model.
 
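For reference, the JSON dictionary output of a sentiment function call has the shape sketched below; the key and value shown are illustrative assumptions, not taken from this README:

```python
# Hypothetical output shape: a dictionary keyed by the requested parameter,
# with the model's classification as the value.
{"sentiment": ["negative"]}
```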
@@ -47,7 +47,6 @@ All of the SLIM models use a novel prompt instruction structured as follows:
 
 The fastest way to get started with slim-sentiment is through direct import in transformers:
 
-'''python
 import ast
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
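To make the truncated snippet above self-contained, here is a minimal, hedged sketch of the full flow: load the model and tokenizer, generate, and parse the model's dictionary-like string output with `ast.literal_eval`. The prompt wrapper, sample text, and generation parameters are illustrative assumptions; the actual SLIM prompt instruction is documented earlier in the README.

```python
import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base (non-quantized) model from the Hugging Face hub.
model = AutoModelForCausalLM.from_pretrained("llmware/slim-sentiment")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sentiment")

# Placeholder prompt: the real SLIM prompt instruction structure is
# described earlier in the README; this wrapper is an assumption.
text = "The quarterly earnings report was deeply disappointing."  # sample input (assumed)
prompt = f"<human>: {text}\n<classify> sentiment </classify>\n<bot>:"  # assumed wrapper

inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs.input_ids.shape[1]

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens; the model emits a dictionary-like
# string keyed by the requested parameter, e.g. "{'sentiment': ['negative']}".
response = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)

try:
    result = ast.literal_eval(response.strip())
except (ValueError, SyntaxError):
    result = {}  # fall back gracefully if the output is not a valid Python literal

print(result)
```

The `ast` import in the README's snippet suggests the output is parsed with `ast.literal_eval`, which, unlike `json.loads`, also accepts Python-style single-quoted dictionary strings.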