## How to use with llama.cpp

### Install with Homebrew

```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
llama-cli -hf Crossberry/tamila
```
### Install with WinGet (Windows)

```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
llama-cli -hf Crossberry/tamila
```
### Use a pre-built binary

```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
./llama-cli -hf Crossberry/tamila
```
### Build from source

```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Crossberry/tamila
```
### Use Docker

```sh
docker model run hf.co/Crossberry/tamila
```
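Once `llama-server` is running, any OpenAI-compatible client can talk to it over HTTP. The sketch below only builds the JSON request body for the server's `/v1/chat/completions` endpoint; the prompt, sampling settings, and the default `localhost:8080` address in the comment are illustrative assumptions, not part of this card:

```python
import json

# Build a request body in the OpenAI-compatible chat format that
# llama-server accepts at /v1/chat/completions. The prompt and
# temperature here are placeholders for your own values.
def build_chat_request(prompt: str, temperature: float = 0.7) -> str:
    body = {
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
    }
    return json.dumps(body)

payload = build_chat_request("Summarize the Thirukkural in one sentence.")
# Send it to a running server (assuming the default port 8080), e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d "$PAYLOAD"
```

The same body works with any OpenAI-style client library pointed at the local server's base URL.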
## πŸš€ Tamila Master v0.3

Created by crossberryweb.

Tamila is a bilingual Tamil/English model trained on a global corpus of more than 2.2 million text segments.

## πŸ“Š Model Benchmarks

| Task | Dataset | Accuracy | Loss |
|---|---|---|---|
| Global Corpus Tuning | 2.2M Segments | 1.0000 | 6.64e-10 |
| Literature (Thirukkural) | Kaggle NLP | 0.9868 | 0.0612 |
| Technical (Kimi K2) | PDF Extract | 1.0000 | 1.17e-06 |

## πŸ›  Future Roadmap

- Integration with advanced Transformer architectures.
- Expanded support for regional Tamil dialects.
- Real-time API integration for mobile applications.

## πŸ“– More Info

This model uses a custom MLP architecture optimized for GGUF deployment. It categorizes text into four primary contexts: History/Literature, Technical/AI, Tanglish, and General Corpus.
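A downstream application will typically want to turn the model's predicted class into one of the four context names above. A minimal sketch of that mapping; the index order is an assumption for illustration, not documented by this card:

```python
# The four context labels named in this card. The index order is
# hypothetical: check your own deployment's label ordering.
CONTEXTS = ["History/Literature", "Technical/AI", "Tanglish", "General Corpus"]

def context_label(class_index: int) -> str:
    """Return the context name for a predicted class index."""
    if not 0 <= class_index < len(CONTEXTS):
        raise ValueError(
            f"expected an index in 0..{len(CONTEXTS) - 1}, got {class_index}"
        )
    return CONTEXTS[class_index]
```

Validating the index before lookup gives a clear error instead of a bare `IndexError` when the classifier and the label list disagree.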


Developed for the open-source community by Crossberryweb.

## Model Details

- Downloads last month: 32
- Format: GGUF
- Model size: 81.9M params
- Architecture: gpt2