Instructions to use averntech/KYRA-1.0X-Horizon with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use averntech/KYRA-1.0X-Horizon with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="averntech/KYRA-1.0X-Horizon")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("averntech/KYRA-1.0X-Horizon")
model = AutoModelForCausalLM.from_pretrained("averntech/KYRA-1.0X-Horizon")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use averntech/KYRA-1.0X-Horizon with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "averntech/KYRA-1.0X-Horizon"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "averntech/KYRA-1.0X-Horizon",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/averntech/KYRA-1.0X-Horizon
```
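Because the vLLM server exposes an OpenAI-compatible API, the curl call above can also be made from Python with the `openai` client. A minimal sketch, assuming `pip install openai` and a server already running on `localhost:8000` (the `chat_request` helper is illustrative, not part of any library):

```python
from typing import Any


def chat_request(prompt: str) -> dict[str, Any]:
    """Build the same payload the curl example sends to the
    OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": "averntech/KYRA-1.0X-Horizon",
        "messages": [{"role": "user", "content": prompt}],
    }


# With a live server, send it via the openai client:
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
#   out = client.chat.completions.create(**chat_request("What is the capital of France?"))
#   print(out.choices[0].message.content)
print(chat_request("Hi")["messages"][0]["role"])  # user
```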
- SGLang
How to use averntech/KYRA-1.0X-Horizon with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "averntech/KYRA-1.0X-Horizon" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "averntech/KYRA-1.0X-Horizon",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "averntech/KYRA-1.0X-Horizon" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "averntech/KYRA-1.0X-Horizon",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Unsloth Studio
How to use averntech/KYRA-1.0X-Horizon with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for averntech/KYRA-1.0X-Horizon to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for averntech/KYRA-1.0X-Horizon to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for averntech/KYRA-1.0X-Horizon to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="averntech/KYRA-1.0X-Horizon",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use averntech/KYRA-1.0X-Horizon with Docker Model Runner:
```shell
docker model run hf.co/averntech/KYRA-1.0X-Horizon
```
Avern Prism 1.0X
Avern Prism 1.0X is a state-of-the-art language model developed by Avern Technology UKI, built on the Qwen2.5 14B architecture and optimized with the Unsloth framework. Prism 1.0X is designed to perform at the intersection of reasoning, coding, and general intelligence, making it suitable for complex problem-solving, logical tasks, and applications ranging from software development to AI-driven research and creative work.
Model Description
- Base Model: Qwen2.5 14B
- Architecture: Transformer (Decoder-only)
- Training Framework: PyTorch + Unsloth
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Context Length: Up to 4096 tokens
- Use Cases: Advanced reasoning, problem-solving, code generation, creative content generation, AI research, knowledge extraction, and more.
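The LoRA fine-tuning method listed above trains a small low-rank adapter pair (A, B) per weight matrix instead of updating the full matrix, which is why it is far cheaper than full fine-tuning. A back-of-the-envelope sketch of the savings (the hidden size of 5120 and rank of 16 are assumptions for illustration, not confirmed training settings):

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one (d_out x d_in) weight matrix:
    A is (rank x d_in) and B is (d_out x rank)."""
    return rank * d_in + d_out * rank


d = 5120                     # example hidden size, assumed for illustration
full = d * d                 # parameters updated by full fine-tuning of one matrix
lora = lora_trainable_params(d, d, rank=16)

print(f"full: {full:,}  lora: {lora:,}  ratio: {100 * lora / full:.3f}%")
```

With these example numbers the adapter trains well under 1% of the parameters of each matrix it attaches to, which is what makes 14B-scale fine-tuning feasible on a single GPU.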
Key Features
- Reasoning: Prism 1.0X is optimized for solving complex logical problems, answering deep conceptual questions, and providing step-by-step reasoning for math and algorithmic problems.
- Code Generation: It supports multi-language code generation (Python, JavaScript, C++, etc.), making it ideal for helping developers write, debug, and optimize code.
- General Intelligence: Prism 1.0X is designed with broad capabilities for general-purpose AI tasks such as understanding abstract concepts, creating creative content, and answering domain-specific queries across multiple fields.
- Size: 14B parameters, balancing capability against hardware requirements.
- Adaptability: Capable of being fine-tuned for specific domains, allowing customization for different applications in research, business, education, or entertainment.
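The 14B parameter count translates directly into a memory footprint when deploying. A rough estimator (a sketch ignoring activation memory and KV cache, which add overhead on top of the weights):

```python
def approx_model_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GiB."""
    return n_params * bytes_per_param / 1024**3


# Weights-only estimate for a 14B-parameter model at common precisions:
for dtype, nbytes in [("fp16/bf16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"{dtype}: ~{approx_model_memory_gb(14e9, nbytes):.1f} GiB")
```

This is why 14B models are commonly served in fp16/bf16 on a single 40–48 GB GPU, or quantized to fit smaller cards.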
Intended Use
This model is ideal for:
- Developers: Assisting with code generation, algorithmic problem solving, and software development tasks.
- Researchers: Leveraging its broad general intelligence to assist with exploratory research, hypothesis generation, and complex problem-solving.
- Educators and Students: Providing tools for learning programming, mathematics, and critical thinking.
- Creative Applications: Writing, brainstorming, and idea generation for creative work.
- AI Enthusiasts: Building custom AI-driven applications with advanced reasoning and coding capabilities.
Training Data
Prism 1.0X was fine-tuned on a combination of datasets:
- Code: Datasets featuring a wide variety of programming languages and coding tasks.
- Reasoning: Datasets for logical reasoning, problem-solving, mathematics, and algorithm design.
- General Knowledge: General-domain knowledge, creative writing, and abstract reasoning datasets, including encyclopedic knowledge and instructional content.
Note: The training data excludes proprietary or private data.
Limitations
- Reasoning and Accuracy: While Prism 1.0X excels at reasoning, it may not always provide perfect solutions to highly specialized problems or new, unseen domains.
- Hallucination Risk: As with most large language models, Prism 1.0X may generate hallucinated or incorrect information, especially in highly abstract or speculative scenarios.
- Context: Though highly capable, it can still struggle with maintaining perfect context over long conversations or complex multi-step tasks without fine-tuning.
How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("avernai/prism-1.0x")
tokenizer = AutoTokenizer.from_pretrained("avernai/prism-1.0x")

# Example: Code generation
prompt = "Write a Python function that calculates the Fibonacci sequence up to n."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Example: Logical reasoning
prompt = "What is the next number in the sequence: 2, 4, 8, 16, ?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Example: General intelligence application
prompt = "Explain the theory of relativity in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
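The `generate` calls above use greedy defaults; for creative tasks, `transformers` also accepts sampling parameters such as `do_sample=True`, `temperature`, and `top_p`. The nucleus (top-p) filter behind that last parameter can be sketched in plain Python (a toy illustration over an explicit probability list, not the library's implementation):

```python
def top_p_filter(probs: list[float], top_p: float) -> list[float]:
    """Nucleus filtering: keep the smallest set of highest-probability
    tokens whose cumulative mass reaches top_p, zero out the rest,
    and renormalize so the kept tokens sum to 1."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = set(), 0.0
    for i in order:
        kept.add(i)
        cum += probs[i]
        if cum >= top_p:     # include the token that crosses the threshold
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0 for i in range(len(probs))]


# With top_p=0.6, only the two most likely tokens survive:
print(top_p_filter([0.4, 0.3, 0.2, 0.1], 0.6))
```

Lower `top_p` (and `temperature`) makes output more deterministic; higher values make it more diverse, which suits the creative use cases described above.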