Instructions for using Intel/tiny-random-falcon with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Intel/tiny-random-falcon with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Intel/tiny-random-falcon")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Intel/tiny-random-falcon")
model = AutoModelForCausalLM.from_pretrained("Intel/tiny-random-falcon")
```
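As a quick sanity check, you can generate with the pipeline directly. A minimal sketch (the prompt and sampling parameters are illustrative, not from this page); note that tiny-random checkpoints are randomly initialized test models, so expect nonsense output:

```python
# Minimal generation sketch using the pipeline shown above.
# The prompt and generation parameters here are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="Intel/tiny-random-falcon")
out = pipe("Once upon a time,", max_new_tokens=20, do_sample=True)
print(out[0]["generated_text"])  # gibberish is expected: test weights are random
```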
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Intel/tiny-random-falcon with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Intel/tiny-random-falcon"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/tiny-random-falcon",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
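Because the vLLM server speaks the OpenAI API, the same request can be made from Python with the `openai` client. A sketch mirroring the curl call above (assumes `pip install openai`):

```python
# Call the running vLLM server with the OpenAI Python client.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM does not require a real key by default
)
completion = client.completions.create(
    model="Intel/tiny-random-falcon",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```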
Use Docker
```shell
docker model run hf.co/Intel/tiny-random-falcon
```
- SGLang
How to use Intel/tiny-random-falcon with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Intel/tiny-random-falcon" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/tiny-random-falcon",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
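As with vLLM, the SGLang server is OpenAI-compatible, so the `openai` client works against port 30000 as well. A sketch that streams the completion instead of waiting for the full response (parameters mirror the curl call above):

```python
# Stream a completion from the SGLang server via the OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
stream = client.completions.create(
    model="Intel/tiny-random-falcon",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
    stream=True,  # yield chunks as they are generated
)
for chunk in stream:
    print(chunk.choices[0].text, end="", flush=True)
print()
```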
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Intel/tiny-random-falcon" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Intel/tiny-random-falcon",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Intel/tiny-random-falcon with Docker Model Runner:
```shell
docker model run hf.co/Intel/tiny-random-falcon
```
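Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch, assuming host-side TCP access on the default port 12434; the port, path, and enablement step come from Docker's docs, not from this page:

```python
# Hedged sketch: query Docker Model Runner's OpenAI-compatible API.
# ASSUMPTIONS (not from this page): TCP access enabled, e.g. via
# `docker desktop enable model-runner --tcp 12434`, and the model
# addressed by the same hf.co/... name used in `docker model run`.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")
resp = client.completions.create(
    model="hf.co/Intel/tiny-random-falcon",
    prompt="Once upon a time,",
    max_tokens=64,
)
print(resp.choices[0].text)
```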
Upload 3 files
- special_tokens_map.json +16 -0
- tokenizer.json +0 -0
- tokenizer_config.json +12 -0
special_tokens_map.json
ADDED
```diff
@@ -0,0 +1,16 @@
+{
+  "additional_special_tokens": [
+    ">>TITLE<<",
+    ">>ABSTRACT<<",
+    ">>INTRODUCTION<<",
+    ">>SUMMARY<<",
+    ">>COMMENT<<",
+    ">>ANSWER<<",
+    ">>QUESTION<<",
+    ">>DOMAIN<<",
+    ">>PREFIX<<",
+    ">>SUFFIX<<",
+    ">>MIDDLE<<"
+  ],
+  "eos_token": "<|endoftext|>"
+}
```
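These are the standard Falcon instruction-format special tokens plus the `<|endoftext|>` EOS token. A quick sketch to confirm they are registered after loading the tokenizer (the output comments are the expected values given the file above):

```python
# Confirm the special tokens from special_tokens_map.json are active.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Intel/tiny-random-falcon")
print(tok.eos_token)                  # expected: <|endoftext|>
print(tok.additional_special_tokens)  # expected: ['>>TITLE<<', ..., '>>MIDDLE<<']
# Registered special tokens are kept intact rather than split:
print(tok.tokenize(">>QUESTION<< example"))
```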
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
```diff
@@ -0,0 +1,12 @@
+{
+  "add_prefix_space": false,
+  "eos_token": "<|endoftext|>",
+  "model_input_names": [
+    "input_ids",
+    "attention_mask"
+  ],
+  "model_max_length": 2048,
+  "name_or_path": "tiiuae/falcon_tokenizer",
+  "special_tokens_map_file": null,
+  "tokenizer_class": "PreTrainedTokenizerFast"
+}
```
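Two settings here are worth noting: `model_input_names` means encoding returns only `input_ids` and `attention_mask`, and `model_max_length` caps truncation at 2048 tokens. A minimal sketch of what that means in practice:

```python
# Demonstrate the tokenizer_config.json settings in practice.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Intel/tiny-random-falcon")
print(tok.model_max_length)  # 2048, from model_max_length

enc = tok("Once upon a time,", truncation=True)  # truncates past 2048 tokens
print(sorted(enc.keys()))  # ['attention_mask', 'input_ids'], per model_input_names
```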