Instructions to use llmware/slim-extract-tiny-onnx with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use llmware/slim-extract-tiny-onnx with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="llmware/slim-extract-tiny-onnx")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract-tiny-onnx")
model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract-tiny-onnx")
```
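Because this repo ships ONNX weights, the plain `transformers` loaders above may not work out of the box; a minimal sketch using Hugging Face Optimum's ONNX Runtime integration instead (assuming the export is compatible with `ORTModelForCausalLM`):

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForCausalLM

# Load the ONNX export through ONNX Runtime rather than PyTorch.
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract-tiny-onnx")
model = ORTModelForCausalLM.from_pretrained("llmware/slim-extract-tiny-onnx")

# Smoke test only; slim models expect a structured extract prompt in practice.
inputs = tokenizer("Once upon a time,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```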
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use llmware/slim-extract-tiny-onnx with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "llmware/slim-extract-tiny-onnx"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "llmware/slim-extract-tiny-onnx",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
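Once the server is up, any OpenAI-compatible client can call it; a minimal sketch with the `openai` Python package (the prompt is a placeholder, and the `api_key` value is a dummy required by the client):

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible API; the key is unused but must be set.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="llmware/slim-extract-tiny-onnx",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```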
- SGLang
How to use llmware/slim-extract-tiny-onnx with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "llmware/slim-extract-tiny-onnx" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "llmware/slim-extract-tiny-onnx",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "llmware/slim-extract-tiny-onnx" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "llmware/slim-extract-tiny-onnx",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
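The same completion request can be issued from Python; a minimal sketch with the `requests` library, mirroring the curl call above:

```python
import requests

# SGLang exposes an OpenAI-compatible completions endpoint on port 30000.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "llmware/slim-extract-tiny-onnx",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```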
- Docker Model Runner
How to use llmware/slim-extract-tiny-onnx with Docker Model Runner:
```bash
docker model run hf.co/llmware/slim-extract-tiny-onnx
```
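Docker Model Runner also exposes an OpenAI-compatible API; a sketch in Python, assuming host-side TCP access is enabled on Docker's default port 12434 (the endpoint path is an assumption based on Docker's documented layout, so verify it against your Docker version):

```python
import requests

# Assumes Docker Model Runner's host-side TCP support is enabled (port 12434).
resp = requests.post(
    "http://localhost:12434/engines/v1/chat/completions",
    json={
        "model": "hf.co/llmware/slim-extract-tiny-onnx",
        "messages": [{"role": "user", "content": "Once upon a time,"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```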
slim-extract-tiny-onnx
slim-extract-tiny-onnx is a specialized function-calling model with a single mission: to look for values in a text based on an "extract" key passed as a parameter. No instructions are required other than the context passage and the target key; the model generates a Python dictionary consisting of the extract key and a list of the values found in the text, returning an empty list if the text does not provide an answer for the selected key.
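For illustration, a minimal sketch of that contract using the llmware library's ModelCatalog interface (the sample passage and the "company" key are made-up inputs, and the catalog name is assumed to match this repo):

```python
from llmware.models import ModelCatalog

# Load the model from the llmware catalog (name assumed to match this repo).
model = ModelCatalog().load_model("slim-extract-tiny-onnx")

text = ("Tesla stock declined 8% in premarket trading after a "
        "poorly-received event in San Francisco.")

# Pass the context passage and a single extract key; the model returns a
# dictionary mapping that key to a list of values found in the text.
response = model.function_call(text, function="extract", params=["company"])
print(response["llm_response"])  # expected shape: {"company": ["Tesla"]}
```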
This is an ONNX int4 quantized version of slim-extract-tiny, providing a very fast, very small inference implementation, optimized for AI PCs.
Model Description
- Developed by: llmware
- Model type: tinyllama
- Parameters: 1.1 billion
- Model Parent: llmware/slim-extract-tiny
- Language(s) (NLP): English
- License: Apache 2.0
- Uses: Extraction of values from complex business documents
- RAG Benchmark Accuracy Score: NA
- Quantization: int4
Model Card Contact