---
tags:
- GGUF
- llama.cpp
- bft
- consensus
- distributed-systems
- quantized
- 4b
- paper-generation
- blockchain
- byzantine-fault-tolerance
- conversational
---
# CAJAL-4B: Autonomous BFT Research Paper Generator
CAJAL-4B is a 4B-parameter model specialized for generating Byzantine Fault Tolerant (BFT) consensus research papers.
## Models
| Quantization | Size | Use Case |
|---|---|---|
| CAJAL-4B-q4_k_m.gguf | 2.7 GB | Low VRAM (<4GB) |
| CAJAL-4B-q8_0.gguf | 4.5 GB | Balanced |
| CAJAL-4B-f16.gguf | 8.4 GB | Highest quality |
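If you prefer to manage the download yourself, the files in the table can be fetched with `huggingface_hub` and loaded by path. A minimal sketch follows; the choice of the q8_0 file and the `n_ctx` value are illustrative assumptions, not documented settings.

```python
# Minimal sketch: fetch one quantization from the Hub and load it by path.
# The filename follows the table above; q8_0 and n_ctx=4096 are example choices.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Agnuxo/CAJAL-4B",
    filename="CAJAL-4B-q8_0.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
```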
## Use with llama.cpp
```bash
llama-cli -hf Agnuxo/CAJAL-4B:Q4_K_M -n 512 \
  --temp 0.42 -p "Write BFT abstract..."
```
## Use with Python
```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Download the quantized GGUF from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="Agnuxo/CAJAL-4B",
    filename="CAJAL-4B-q4_k_m.gguf",
)
```
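After loading, generation goes through llama-cpp-python's chat completion API. The sketch below reuses the prompt and temperature from the llama.cpp example above; `max_tokens` is an assumed value, not a setting documented by this card.

```python
# Minimal sketch: generate a BFT paper abstract with the loaded model.
# Prompt and temperature mirror the llama-cli example; max_tokens is assumed.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write BFT abstract..."}
    ],
    temperature=0.42,
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```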
## Results
- Papers published: 36+ on p2pclaw.com
- Best score: 7.0/10
- Target: ≥8/10
## Repository
- GitHub: https://github.com/Agnuxo1/CAJAL
- Paper Harness: `harness.py` (included in the repository; a hypothetical sketch of a generate-and-score loop is shown below)
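The card does not document the interface of `harness.py`. Purely as a hypothetical illustration of the generate-then-score loop implied by the Results section (papers scored out of 10 against a ≥8/10 target), a sketch might look like this; `score_paper` and the threshold handling are invented placeholders, not the harness's real API.

```python
# Hypothetical sketch of a paper-generation harness loop; harness.py's actual
# interface is not documented here. score_paper() is an invented placeholder.
from llama_cpp import Llama

TARGET_SCORE = 8.0  # target from the Results section (>= 8/10)

def score_paper(text: str) -> float:
    """Placeholder reviewer: the real harness would implement its own scoring."""
    raise NotImplementedError

llm = Llama.from_pretrained(
    repo_id="Agnuxo/CAJAL-4B",
    filename="CAJAL-4B-q4_k_m.gguf",
)

draft = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write BFT abstract..."}],
    temperature=0.42,
)["choices"][0]["message"]["content"]

if score_paper(draft) >= TARGET_SCORE:
    print("Draft meets the target score; ready to publish.")
```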
## License
Apache 2.0
Generated: 2025-05-07