Instructions for using Yueha0/FoodLMM-Chat with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Yueha0/FoodLMM-Chat with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Yueha0/FoodLMM-Chat")

# Load model directly
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Yueha0/FoodLMM-Chat")
model = AutoModelForCausalLM.from_pretrained("Yueha0/FoodLMM-Chat")
```
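Once the pipeline exists, generation is a single call. A minimal usage sketch with an illustrative prompt; note that the discussion at the bottom of this page suggests the uploaded checkpoint may not load cleanly through Transformers alone, so treat this as the generic pattern rather than a verified run:

```python
# Illustrative prompt only; the model is food-domain-focused.
output = pipe("List the main nutrients in a bowl of ramen.", max_new_tokens=128)
print(output[0]["generated_text"])
```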
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Yueha0/FoodLMM-Chat with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Yueha0/FoodLMM-Chat"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Yueha0/FoodLMM-Chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/Yueha0/FoodLMM-Chat
```
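Besides curl, the pip-served endpoint above (port 8000) is OpenAI-compatible, so the official openai Python client works against it. A minimal sketch; the "EMPTY" API key is just the usual placeholder for local servers:

```python
from openai import OpenAI

# Point the client at the local vLLM server; vLLM does not check the API key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Yueha0/FoodLMM-Chat",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```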
- SGLang
How to use Yueha0/FoodLMM-Chat with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Yueha0/FoodLMM-Chat" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Yueha0/FoodLMM-Chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Yueha0/FoodLMM-Chat" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Yueha0/FoodLMM-Chat",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
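Either server variant above exposes the same OpenAI-compatible endpoint on port 30000, so the curl call translates directly to Python. A minimal sketch using requests:

```python
import requests

# Same payload as the curl example, sent to the OpenAI-compatible endpoint.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Yueha0/FoodLMM-Chat",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```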
- Docker Model Runner
How to use Yueha0/FoodLMM-Chat with Docker Model Runner:
```bash
docker model run hf.co/Yueha0/FoodLMM-Chat
```
The output contains no actual values, only meaningless placeholder symbols
Hello, I am really interested in FoodLMM and I think this work is useful and creative.
But it seems the model files don't include a processor, so we cannot use it directly through Hugging Face.
So I tried using LISA and swapped the model to FoodLMM, and I found that when I ask the model for nutrition information, it cannot output the actual nutrient values; instead it outputs [MASS_TOTAL] g / [CAL_TOTAL] kcal, which is meaningless. Can you figure out what happened?
Also, I cannot generate pictures... Do you know what's wrong with it?
FoodLMM has specific heads that produce the actual values for these tokens, so please refer to our GitHub repository at https://github.com/YuehaoYin/FoodLMM for how to run inference with the trained model.
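For readers unfamiliar with this design, here is a purely illustrative sketch (ValueHead, decode_mass, and MASS_TOTAL_IDX are hypothetical names, not FoodLMM's actual code) of how a regression head can map the hidden state at a special token's position to a numeric value. Without such a head in the decoding loop, the tokenizer simply prints the placeholder string, which is exactly the [MASS_TOTAL] g / [CAL_TOTAL] kcal behavior described above:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small MLP that regresses a scalar (e.g. total mass
# in grams) from the hidden state at the position where the model emitted
# the special token. FoodLMM's real heads live in its repository.
class ValueHead(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, hidden_state: torch.Tensor) -> torch.Tensor:
        return self.mlp(hidden_state)

MASS_TOTAL_IDX = 32003  # placeholder vocabulary id for [MASS_TOTAL]

def decode_mass(token_ids: torch.Tensor, hidden_states: torch.Tensor,
                head: ValueHead) -> float:
    # Find where the model emitted the special token in the generated sequence.
    positions = (token_ids == MASS_TOTAL_IDX).nonzero(as_tuple=True)[0]
    if positions.numel() == 0:
        raise ValueError("no [MASS_TOTAL] token in the output")
    # The scalar prediction replaces the raw "[MASS_TOTAL]" string in the output.
    return head(hidden_states[positions[0]]).item()
```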

