## How to use with llama.cpp

### Install from WinGet (Windows)
```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
llama-cli -hf Crossberry/tamila
```
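Once the server is up, you can exercise the OpenAI-compatible API directly. This is a minimal sketch assuming the default llama-server listen address (http://localhost:8080); the prompt text is only an illustration:

```bash
# Send a chat completion request to the local llama-server instance.
# llama-server listens on http://localhost:8080 by default.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Translate to Tamil: Good morning!"}
    ],
    "max_tokens": 128
  }'
```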
### Use pre-built binary

```bash
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
```
./llama-cli -hf Crossberry/tamila
```
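Before sending requests, you can confirm the server is ready: llama-server exposes a `/health` endpoint, assumed here to be on the default port 8080:

```bash
# Returns {"status":"ok"} once the model has finished loading.
curl http://localhost:8080/health
```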
### Build from source code

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Crossberry/tamila# Run inference directly in the terminal:
./build/bin/llama-cli -hf Crossberry/tamila
```
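If you have a CUDA-capable GPU, the build step above can optionally enable GPU offload; `GGML_CUDA` is the standard llama.cpp CMake flag, and `-ngl` controls how many layers are offloaded at run time:

```bash
# Optional: build with CUDA support so layers can be offloaded to the GPU.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli

# Offload as many layers as fit onto the GPU (-ngl 99).
./build/bin/llama-server -hf Crossberry/tamila -ngl 99
```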
### Use Docker

```bash
docker model run hf.co/Crossberry/tamila
```
# Tamila Master v0.3
Created by crossberryweb
Tamila is a high-performance bilingual Tamil/English model trained on a global corpus of over 2.2 million segments.
## Project Links
- Live Demo (HF Space): https://huggingface.co/spaces/Crossberry/tamila-test-app
- Web Deployment: crossberry.vercel.app
- Dataset Repository: Hugging Face Tamila
## Model Benchmarks
| Task | Dataset | Accuracy | Loss |
|---|---|---|---|
| Global Corpus Tuning | 2.2M Segments | 1.0000 | 6.64e-10 |
| Literature (Thirukkural) | Kaggle NLP | 0.9868 | 0.0612 |
| Technical (Kimi K2) | PDF Extract | 1.0000 | 1.17e-06 |
## Future Roadmap
- Integration with advanced Transformer architectures.
- Expanded support for regional Tamil dialects.
- Real-time API integration for mobile applications.
## More Info
This model utilizes a custom MLP architecture optimized for GGUF deployment. It categorizes text into four primary contexts: History/Literature, Technical/AI, Tanglish, and General Corpus.
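As a rough illustration of the four-context behaviour, you can prompt the model from the terminal. This is only a sketch: it assumes the GGUF accepts free-form prompts via llama-cli, and the prompt wording is illustrative, not a fixed API:

```bash
# Ask the model to place a sentence into one of its four contexts.
# The prompt phrasing and sample text are illustrative; adjust to your use case.
llama-cli -hf Crossberry/tamila \
  -p "Classify the following text as History/Literature, Technical/AI, Tanglish, or General Corpus: 'Epdi irukka bro, semma busy ah?'" \
  -n 64
```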
Developed for the open-source community by Crossberryweb.
### Install from brew (macOS/Linux)

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Crossberry/tamila

# Run inference directly in the terminal:
llama-cli -hf Crossberry/tamila
```