Instructions for using RockSky1/Infinity_1.0 with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use RockSky1/Infinity_1.0 with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RockSky1/Infinity_1.0",
    filename="Infinity_1.0.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
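For chat-style prompts, llama-cpp-python also provides `create_chat_completion`. A minimal sketch, assuming the GGUF metadata includes a chat template (the example question is illustrative):

```python
# Minimal chat sketch; assumes the GGUF file ships a chat template
# so llama-cpp-python can format the messages correctly.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```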
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use RockSky1/Infinity_1.0 with llama.cpp:
Install with Homebrew
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf RockSky1/Infinity_1.0

# Run inference directly in the terminal:
llama-cli -hf RockSky1/Infinity_1.0
```
Install with WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf RockSky1/Infinity_1.0

# Run inference directly in the terminal:
llama-cli -hf RockSky1/Infinity_1.0
```
Use a pre-built binary
```sh
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf RockSky1/Infinity_1.0

# Run inference directly in the terminal:
./llama-cli -hf RockSky1/Infinity_1.0
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf RockSky1/Infinity_1.0

# Run inference directly in the terminal:
./build/bin/llama-cli -hf RockSky1/Infinity_1.0
```
Use Docker
```sh
docker model run hf.co/RockSky1/Infinity_1.0
```
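Once `llama-server` is running (via any of the installs above), any OpenAI-compatible client can talk to it. A minimal sketch with curl, assuming llama-server's default address of http://localhost:8080 (adjust if you changed the host or port):

```sh
# Query the local llama-server through its OpenAI-compatible endpoint.
# Host and port are the llama-server defaults; adjust as needed.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Once upon a time,"}
    ],
    "max_tokens": 128
  }'
```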
- LM Studio
- Jan
- Ollama
How to use RockSky1/Infinity_1.0 with Ollama:
```sh
ollama run hf.co/RockSky1/Infinity_1.0
```
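While Ollama is running it also exposes a local REST API, so the same model can be queried programmatically. A minimal sketch with curl, assuming Ollama's default port 11434 (the prompt is illustrative):

```sh
# Generate a completion through Ollama's local REST API.
# Port 11434 is the Ollama default.
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/RockSky1/Infinity_1.0",
  "prompt": "Once upon a time,",
  "stream": false
}'
```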
- Unsloth Studio
How to use RockSky1/Infinity_1.0 with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for RockSky1/Infinity_1.0 to start chatting.
```
Install Unsloth Studio (Windows)
```sh
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for RockSky1/Infinity_1.0 to start chatting.
```
Use Hugging Face Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for RockSky1/Infinity_1.0 to start chatting.
- Docker Model Runner
How to use RockSky1/Infinity_1.0 with Docker Model Runner:
```sh
docker model run hf.co/RockSky1/Infinity_1.0
```
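Docker Model Runner can also expose an OpenAI-compatible endpoint for other tools. The sketch below is an assumption-heavy example: it presumes host-side TCP access has been enabled on port 12434 (the documented default) and that the `/engines/v1` path applies to your Docker Desktop version; check the Docker Model Runner docs for your setup:

```sh
# Assumption: host TCP access was enabled beforehand, e.g. via
#   docker desktop enable model-runner --tcp 12434
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hf.co/RockSky1/Infinity_1.0",
    "messages": [{"role": "user", "content": "Once upon a time,"}]
  }'
```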
- Lemonade
How to use RockSky1/Infinity_1.0 with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull RockSky1/Infinity_1.0
```
Run and chat with the model
```sh
lemonade run user.Infinity_1.0-{{QUANT_TAG}}
```
List all available models
```sh
lemonade list
```
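Lemonade runs a local server that speaks the OpenAI API, so standard clients can connect to it. A minimal sketch, assuming a base URL of http://localhost:8000/api/v1 (both the port and the path are assumptions; check the Lemonade docs for your install):

```sh
# Assumption: Lemonade's OpenAI-compatible server is listening on
# http://localhost:8000/api/v1; verify against the Lemonade docs.
curl http://localhost:8000/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "user.Infinity_1.0-{{QUANT_TAG}}",
    "messages": [{"role": "user", "content": "Once upon a time,"}]
  }'
```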
---
license: llama3
base_model:
- meta-llama/Meta-Llama-3-8B
---

# ♾️ Infinity 1.0 (Llama-3-8B GGUF)
**Developed by:** [RockSky1](https://huggingface.co/RockSky1)
**Model Type:** Causal Language Model
**Base Model:** Meta-Llama-3-8B
**Format:** GGUF (Quantized for efficiency)
## 🚀 Overview

**Infinity 1.0** is a high-performance, fine-tuned version of the Llama-3-8B architecture. This model is designed to be the "Brain" of the Infinity AI ecosystem, offering fast, creative, and technically sound responses. It has been optimized for local deployment and low-latency interactions.

## ✨ Key Features

* **Optimized Architecture:** Fine-tuned over multiple epochs (v5 development cycle) for superior reasoning.
* **GGUF Format:** Ready for offline use in LM Studio, Ollama, and mobile LLM runners.
* **Quantized Precision:** Balanced performance-to-size ratio using Q4_K_M quantization.
* **Coding & Logic:** Strong capabilities in full-stack development and architectural logic.
## 🛠️ How to Use

You can use this model offline with any GGUF-compatible runner:

1. **LM Studio:** Search for `RockSky1/Infinity_1.0` and download.
2. **Ollama:** Create a Modelfile that points to the `.gguf` file (see the sketch after this list).
3. **Mobile:** Load via Layla or MLC LLM apps.
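For the Ollama route, a minimal Modelfile sketch, assuming the quantized weights were downloaded locally as `Infinity_1.0.gguf` (both that file name and the `infinity` tag below are illustrative):

```sh
# Write a minimal Modelfile pointing at the local GGUF weights
# (the file name is an assumption; match whatever you downloaded).
cat > Modelfile <<'EOF'
FROM ./Infinity_1.0.gguf
EOF

# Register the model under a local tag of your choosing, then chat with it.
ollama create infinity -f Modelfile
ollama run infinity
```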
## 📜 License

This model follows the Meta Llama 3 Community License.

---

*Created with ❤️ by Shivam Kumar (RockSky1)*