How to use from llama.cpp

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sakeador/BertUn55
# Run inference directly in the terminal:
llama-cli -hf Sakeador/BertUn55
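Besides the web UI, llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it with curl, assuming the server's default port of 8080 and no API key configured:
# Send a chat completion request to the local server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'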

Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Sakeador/BertUn55
# Run inference directly in the terminal:
./llama-cli -hf Sakeador/BertUn55
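For non-interactive, one-shot generation, llama-cli also accepts a prompt on the command line; a minimal sketch using the -p (prompt) and -n (tokens to predict) flags:
# Generate up to 128 tokens for a single prompt, then exit:
./llama-cli -hf Sakeador/BertUn55 -p "Write a hello world in Python" -n 128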

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Sakeador/BertUn55
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Sakeador/BertUn55
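If you have a supported GPU, a hardware backend can be enabled at configure time. A sketch assuming an installed CUDA toolkit (backend flag names can change between llama.cpp releases, so check the build docs for your version):
# Configure with the CUDA backend enabled, then build as above:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli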

Use Docker
docker model run hf.co/Sakeador/BertUn55
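The docker model subcommand is provided by Docker Model Runner in recent Docker Desktop releases. A sketch of pulling the model explicitly and then running a one-shot prompt, assuming Model Runner is enabled (the trailing prompt argument is optional; omitting it starts an interactive chat):
# Pull the model from Hugging Face, then run a single prompt:
docker model pull hf.co/Sakeador/BertUn55
docker model run hf.co/Sakeador/BertUn55 "Write a hello world in Python"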
Special Acknowledgments
"This model was created as a tribute to an exceptional individual whose unwavering support has been pivotal throughout my technology career. Thank you for being my mentor, inspiration, and anchor through every professional challenge." 🔗ðŸ§
Dedicated to: [XSecretNameX]
Key Contributions:
- Model architecture guidance
- Critical code debugging
- Pipeline optimization
Development Background
This project was developed in recognition of professional support received during:
- Cloud infrastructure migration (AWS/GCP)
- MLOps implementation
- High-scale system troubleshooting (2020-2024)
Collaboration Highlights
This architecture incorporates lessons learned from collaborative work on:
- CI/CD pipeline design
- Kubernetes cluster management
- Real-time monitoring systems
Model tree for Sakeador/BertUn55
Base model: mistralai/Codestral-22B-v0.1

Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Sakeador/BertUn55
# Run inference directly in the terminal:
llama-cli -hf Sakeador/BertUn55
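Whichever install method you use, you can verify the server is up before sending requests; a small sketch assuming the default port of 8080 (llama-server exposes a /health endpoint):
# Check that the server is ready to accept requests:
curl http://localhost:8080/health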