---
license: apache-2.0
---
# SLIM-TAGS-TOOL
**slim-tags-tool** is a Q4_K_M quantized GGUF version of slim-tags, providing a small, fast inference implementation optimized for multi-model concurrent deployment.
[**slim-tags**](https://huggingface.co/llmware/slim-tags) is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") series, providing a set of small, specialized decoder-based LLMs, fine-tuned for function-calling.
To pull the model via API:

```python
from huggingface_hub import snapshot_download

snapshot_download("llmware/slim-tags-tool", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)
```
Load in your favorite GGUF inference engine, or try with llmware as follows:

```python
from llmware.models import ModelCatalog

# to load the model and make a basic inference
model = ModelCatalog().load_model("slim-tags-tool")
response = model.function_call(text_sample)

# this one line will download the model and run a series of tests
ModelCatalog().tool_test_run("slim-tags-tool", verbose=True)
```
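The `function_call` response arrives as a dictionary; as a rough sketch of downstream post-processing (the `"llm_response"` key and the list-of-tags shape are assumptions for illustration here, so verify against the model's actual test output), the returned tags can be normalized and deduplicated like this:

```python
# Hedged sketch: normalize a tags response into a deduplicated list.
# The response shape ("llm_response" holding a list of tag strings, or a
# comma-separated string) is assumed for illustration, not a guaranteed
# llmware contract.

def extract_tags(response):
    """Return a deduplicated, order-preserving list of cleaned tag strings."""
    raw = response.get("llm_response", [])
    if isinstance(raw, str):
        # Some outputs may arrive as a single comma-separated string.
        raw = raw.split(",")
    seen, tags = set(), []
    for tag in raw:
        t = str(tag).strip()
        if t and t.lower() not in seen:
            seen.add(t.lower())
            tags.append(t)
    return tags

# Example with a mocked response:
sample = {"llm_response": ["NASDAQ", "earnings", "NASDAQ", " earnings "]}
print(extract_tags(sample))  # -> ['NASDAQ', 'earnings']
```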
Slim models can also be loaded even more simply as part of multi-model, multi-step LLMfx calls:

```python
from llmware.agents import LLMfx

llm_fx = LLMfx()
llm_fx.load_tool("tags")
response = llm_fx.tags(text)
```
Note: please review [**config.json**](https://huggingface.co/llmware/slim-tags-tool/blob/main/config.json) in the repository for prompt wrapping information, details on the model, and the full test set.
## Model Card Contact
Darren Oberst & llmware team
[Any questions? Join us on Discord](https://discord.gg/MhZn5Nc39h)