Instructions to use moemoe101/tinyllama-AISmartRecipe-Lite with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use moemoe101/tinyllama-AISmartRecipe-Lite with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="moemoe101/tinyllama-AISmartRecipe-Lite",
    filename="tinyllama lite 32.32.1.2e-5.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use moemoe101/tinyllama-AISmartRecipe-Lite with llama.cpp:
Install from brew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf moemoe101/tinyllama-AISmartRecipe-Lite

# Run inference directly in the terminal:
llama-cli -hf moemoe101/tinyllama-AISmartRecipe-Lite
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf moemoe101/tinyllama-AISmartRecipe-Lite

# Run inference directly in the terminal:
llama-cli -hf moemoe101/tinyllama-AISmartRecipe-Lite
```
Use pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf moemoe101/tinyllama-AISmartRecipe-Lite

# Run inference directly in the terminal:
./llama-cli -hf moemoe101/tinyllama-AISmartRecipe-Lite
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf moemoe101/tinyllama-AISmartRecipe-Lite

# Run inference directly in the terminal:
./build/bin/llama-cli -hf moemoe101/tinyllama-AISmartRecipe-Lite
```
Use Docker
```sh
docker model run hf.co/moemoe101/tinyllama-AISmartRecipe-Lite
```
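Once `llama-server` is running, it can also be queried programmatically through its OpenAI-compatible API. A minimal sketch using only the Python standard library; the port (8080, llama-server's default) and the ingredient list are assumptions, so adjust them to your setup:

```python
# Sketch: querying a running llama-server through its OpenAI-compatible
# chat endpoint. Port 8080 is llama-server's default; the ingredient
# list is illustrative.
import json
import urllib.request

def build_payload(ingredients, max_tokens=256):
    """Build a chat-completion request asking for a recipe."""
    prompt = ("Create a detailed step by step cooking recipe instructions "
              "based on these ingredients: " + ", ".join(ingredients))
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    payload = build_payload(["eggs", "rice", "soy sauce"])
    req = urllib.request.Request(
        "http://localhost:8080/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    print(body["choices"][0]["message"]["content"])
```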
- LM Studio
- Jan
- Ollama
How to use moemoe101/tinyllama-AISmartRecipe-Lite with Ollama:
```sh
ollama run hf.co/moemoe101/tinyllama-AISmartRecipe-Lite
```
- Unsloth Studio
How to use moemoe101/tinyllama-AISmartRecipe-Lite with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for moemoe101/tinyllama-AISmartRecipe-Lite to start chatting
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for moemoe101/tinyllama-AISmartRecipe-Lite to start chatting
```
Using HuggingFace Spaces for Unsloth
```sh
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for moemoe101/tinyllama-AISmartRecipe-Lite to start chatting
```
- Docker Model Runner
How to use moemoe101/tinyllama-AISmartRecipe-Lite with Docker Model Runner:
```sh
docker model run hf.co/moemoe101/tinyllama-AISmartRecipe-Lite
```
- Lemonade
How to use moemoe101/tinyllama-AISmartRecipe-Lite with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull moemoe101/tinyllama-AISmartRecipe-Lite
```
Run and chat with the model
```sh
lemonade run user.tinyllama-AISmartRecipe-Lite-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
This is my very first fine-tuned model: a recipe-generating AI that creates cooking recipes from ingredients entered by the user.
However, I found that fine-tuning a small model like TinyLlama on a large dataset resulted in poor inference quality; the results with TinyLlama are unsatisfactory at the moment.
I will keep looking into improving the performance of small models fine-tuned for cooking recipes, as there are many benefits to using a small model.
The base model is TinyLlama, fine-tuned on the recipe_nlg dataset, which is further filtered down so that mostly the best recipes remain as training data.
There are many versions of the model, as I experimented with many of the hyperparameters. The model name indicates the parameters used during training, in order: LoRA rank, LoRA alpha, training steps, and lastly the learning rate.
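Under my reading of that naming scheme (rank, alpha, steps, learning rate separated by dots, which is an assumption inferred from the file name in this repo), the encoded values can be decoded like this:

```python
# Sketch: decoding the hyperparameters encoded in a model file name,
# assuming the order described above: LoRA rank, LoRA alpha, training
# steps, learning rate. The example name is the GGUF file in this repo.
def parse_model_name(filename):
    stem = filename.removesuffix(".gguf")
    params = stem.split(" ")[-1]            # e.g. "32.32.1.2e-5"
    rank, alpha, steps, lr = params.split(".", 3)
    return {
        "lora_rank": int(rank),
        "lora_alpha": int(alpha),
        "training_steps": int(steps),
        "learning_rate": float(lr),
    }

print(parse_model_name("tinyllama lite 32.32.1.2e-5.gguf"))
# → {'lora_rank': 32, 'lora_alpha': 32, 'training_steps': 1,
#    'learning_rate': 2e-05}
```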
The model's behavior varies depending on which parameters I adjusted, so not all versions have the same consistency or performance; some versions performed much better than others.
The best way to use this model is with the prompt "Create a detailed step by step cooking recipe instructions based on these ingredients", followed by the items you would like to include in the recipe.
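The recommended prompt can be wired into the llama-cpp-python example from earlier on this page. A minimal sketch; the ingredient list is illustrative:

```python
# Sketch: combining the recommended recipe prompt with llama-cpp-python,
# as shown earlier on this page. The ingredient list is illustrative.
def build_prompt(ingredients):
    """Prepend the model card's suggested instruction to the
    user's ingredient list."""
    return ("Create a detailed step by step cooking recipe instructions "
            "based on these ingredients: " + ", ".join(ingredients))

if __name__ == "__main__":
    # Imported here so the prompt helper works without llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="moemoe101/tinyllama-AISmartRecipe-Lite",
        filename="tinyllama lite 32.32.1.2e-5.gguf",
    )
    output = llm(build_prompt(["chicken breast", "garlic", "lemon"]),
                 max_tokens=512)
    print(output["choices"][0]["text"])
```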
Any feedback is welcome. You can contact me on Discord under the username _moemoe_.
- Downloads last month
- 42