Instructions to use LumiOpen/Poro-34B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LumiOpen/Poro-34B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LumiOpen/Poro-34B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LumiOpen/Poro-34B")
model = AutoModelForCausalLM.from_pretrained("LumiOpen/Poro-34B")
```
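Once the pipeline is created, you can generate text directly from a prompt. The sketch below is illustrative rather than official guidance: the prompt and sampling settings are arbitrary, and the torch_dtype/device_map arguments are optional additions that assume bfloat16-capable hardware and the accelerate package.

```python
# Minimal sketch: generate a continuation with the text-generation pipeline.
# Prompt and sampling settings are illustrative, not tuned for Poro-34B.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="LumiOpen/Poro-34B",
    torch_dtype=torch.bfloat16,  # halves memory vs. float32; assumes bf16 support
    device_map="auto",           # spreads the 34B model across available devices (needs accelerate)
)

output = pipe(
    "Once upon a time,",
    max_new_tokens=64,   # length of the generated continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.5,
)
print(output[0]["generated_text"])
```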
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LumiOpen/Poro-34B with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LumiOpen/Poro-34B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LumiOpen/Poro-34B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
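vLLM can also be used offline from Python without starting the HTTP server. Below is a minimal sketch of that route; the prompt and sampling values simply mirror the curl example above, and the optional tensor_parallel_size argument is only an assumption about how you might shard the model across GPUs.

```python
# Minimal sketch: run Poro-34B through vLLM's offline Python API instead of
# the HTTP server. Sampling values mirror the curl example above.
from vllm import LLM, SamplingParams

llm = LLM(model="LumiOpen/Poro-34B")  # add tensor_parallel_size=N to shard across N GPUs

params = SamplingParams(temperature=0.5, max_tokens=512)
outputs = llm.generate(["Once upon a time,"], params)

for output in outputs:
    print(output.outputs[0].text)
```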
Use Docker
```shell
docker model run hf.co/LumiOpen/Poro-34B
```
- SGLang
How to use LumiOpen/Poro-34B with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "LumiOpen/Poro-34B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LumiOpen/Poro-34B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
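Because the server exposes an OpenAI-compatible API, you can also call it from Python with the openai client package instead of curl. This is a minimal sketch that assumes the server started above is reachable on localhost:30000; the api_key value is a placeholder, since the local server does not check it unless it is configured with an API key.

```python
# Minimal sketch: call the local SGLang server through its OpenAI-compatible
# completions endpoint (same request as the curl example above).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:30000/v1",
    api_key="EMPTY",  # placeholder; not checked unless the server is configured with an API key
)

response = client.completions.create(
    model="LumiOpen/Poro-34B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```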
Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "LumiOpen/Poro-34B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LumiOpen/Poro-34B",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
- Docker Model Runner
How to use LumiOpen/Poro-34B with Docker Model Runner:
```shell
docker model run hf.co/LumiOpen/Poro-34B
```
Clarification on the regex of the tokenizer configuration
Your JSON tokenizer configuration uses the following regex to split the input:
" ?[^(\\s|[.,!?\u2026\u3002\uff0c\u3001\u0964\u06d4\u060c])]+"
or \s?[^(\s|[.,!?…。,、।۔،])]+
I don't understand why you have nested brackets around [.,!?…。,、।۔،].
Why do you separate \s from the rest of the characters?
Also, why do you try to capture (with parentheses)?
Wouldn't it be the same as " ?[^\s.,!?…。,、।۔،]+"?
Maybe there is a fancy regex feature I don't know about.
For context, I am trying to load this configuration from .NET (C#), and the standard regex engine does not understand this regex.
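To show what I mean, here is the simplified pattern tried with Python's standard re module (just a sketch on my side; I know this is not the engine the tokenizer library actually uses, so it only illustrates what the flattened character class matches, with the characters taken from the \uXXXX escapes in the config):

```python
# Sketch only: test the simplified pattern " ?[^\s.,!?…。，、।۔،]+" with Python's
# re module. This is not the regex engine the tokenizer library uses; it just
# shows how the flattened character class splits a sample string.
import re

simplified = re.compile(r" ?[^\s.,!?…。，、।۔،]+")

sample = "Hello world, how are you? Hyvää päivää!"
print(simplified.findall(sample))
# ['Hello', ' world', ' how', ' are', ' you', ' Hyvää', ' päivää']
# Each chunk keeps its optional leading space; whitespace and the listed
# punctuation never appear inside a chunk.
```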
I'm sorry, but I don't know why these decisions were made. We inherited the splitting in this tokenizer, maybe from BLOOM? I forget exactly.
I think you might be right, though, that the nested brackets are not required. It honestly looks like the person who wrote this was a little bit confused, but I don't really know the larger context or what the intentions were.
It might be worth looking at Llama.cpp's support for this tokenizer; that implementation might be more portable.