Messages Summarization project (collection)
An ITMO project for automatic summarization of drivers' messages and structuring the results as JSON.
Use this model with llama.cpp. Install with winget (Windows):

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Salexoid/tiny-rullama-1b:Q8_0
# Run inference directly in the terminal:
llama-cli -hf Salexoid/tiny-rullama-1b:Q8_0

Or use a pre-built binary:

# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Salexoid/tiny-rullama-1b:Q8_0
# Run inference directly in the terminal:
./llama-cli -hf Salexoid/tiny-rullama-1b:Q8_0

Or build from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Salexoid/tiny-rullama-1b:Q8_0
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Salexoid/tiny-rullama-1b:Q8_0

Or run with Docker Model Runner:

docker model run hf.co/Salexoid/tiny-rullama-1b:Q8_0

This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0; the fine-tuning dataset is not specified on this card.
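The collection description above indicates the intended task: summarizing a driver's message and structuring the result as JSON. The card does not document a prompt format or output schema, so the invocation below is only an illustrative sketch; the prompt wording is an assumption, and llama-cli is assumed to be on your PATH (for example after the winget or brew install).

# Illustrative only: the prompt text and expected JSON fields are not documented by the card.
llama-cli -hf Salexoid/tiny-rullama-1b:Q8_0 \
  -n 256 \
  -p "Summarize the following driver message and return the result as JSON: 'Stuck in traffic on the ring road, will be about 40 minutes late to the unloading point.'"

The -n flag caps the number of generated tokens; raise or lower it to match the summary length you expect.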
Model description, intended uses and limitations, and training and evaluation data: more information needed.
The following hyperparameters were used during training:
Quantization: 8-bit (Q8_0 GGUF)
Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
Install from brew (macOS/Linux):

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Salexoid/tiny-rullama-1b:Q8_0
# Run inference directly in the terminal:
llama-cli -hf Salexoid/tiny-rullama-1b:Q8_0
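Whichever installation method you use, llama-server exposes an OpenAI-compatible HTTP API. As a minimal sketch, assuming the server was started with one of the commands above and is listening on its default address http://localhost:8080, a chat-completion request can be sent with curl; the message content here is only a placeholder:

# Send a chat completion request to the locally running llama-server.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize this driver message as JSON: stuck at the warehouse gate, expect a 20 minute delay."}
    ],
    "temperature": 0.2
  }'

Any OpenAI-compatible client library can use the same endpoint by pointing its base URL at the local server.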