---
license: llama3
base_model:
- meta-llama/Meta-Llama-3-8B
---
# ♾️ Infinity 1.0 (Llama-3-8B GGUF)
* **Developed by:** [RockSky1](https://huggingface.co/RockSky1)
* **Model Type:** Causal Language Model
* **Base Model:** Meta-Llama-3-8B
* **Format:** GGUF (quantized for efficiency)
## 🚀 Overview
**Infinity 1.0** is a high-performance, fine-tuned version of the Llama-3-8B architecture. This model is designed to be the "Brain" of the Infinity AI ecosystem, offering fast, creative, and technically sound responses. It has been optimized for local deployment and low-latency interactions.
## ✨ Key Features
* **Optimized Training:** Fine-tuned over multiple epochs (v5 development cycle) for improved reasoning.
* **GGUF Format:** Ready for offline use in LM Studio, Ollama, and mobile LLM runners.
* **Quantized Precision:** Balanced performance-to-size ratio using Q4_K_M quantization.
* **Coding & Logic:** Strong capabilities in full-stack development and architectural logic.
## 🛠️ How to Use
You can run this model offline with any GGUF-compatible runner:
1. **LM Studio:** Search for `RockSky1/Infinity_1.0` and download.
2. **Ollama:** Create a Modelfile and point it to the `.gguf` file.
3. **Mobile:** Load via Layla or MLC LLM apps.
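For step 2, a minimal Modelfile might look like the sketch below. The filename `Infinity_1.0.Q4_K_M.gguf` is an assumption based on the Q4_K_M quantization noted above; substitute the actual name of the `.gguf` file you downloaded. The template follows the standard Llama 3 instruct format.

```
# Minimal Ollama Modelfile (sketch; the .gguf filename is an assumption)
FROM ./Infinity_1.0.Q4_K_M.gguf

# Llama 3 instruct prompt template
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
PARAMETER stop "<|eot_id|>"
```

Then register and run the model:

```
ollama create infinity-1.0 -f Modelfile
ollama run infinity-1.0
```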
## 📜 License
This model follows the Meta Llama 3 Community License.
---
*Created with ❤️ by Shivam Kumar (RockSky1)*