How to use from llama.cpp

Install from WinGet (Windows)

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Jebadiah/Aria-coder-7b:F16

# Run inference directly in the terminal:
llama-cli -hf Jebadiah/Aria-coder-7b:F16

Install from brew

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Jebadiah/Aria-coder-7b:F16

# Run inference directly in the terminal:
llama-cli -hf Jebadiah/Aria-coder-7b:F16

Use a pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Jebadiah/Aria-coder-7b:F16

# Run inference directly in the terminal:
./llama-cli -hf Jebadiah/Aria-coder-7b:F16

Build from source code

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Jebadiah/Aria-coder-7b:F16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Jebadiah/Aria-coder-7b:F16

Use Docker

docker model run hf.co/Jebadiah/Aria-coder-7b:F16
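Once llama-server is up, it exposes an OpenAI-compatible HTTP API alongside the web UI. Below is a minimal sketch of querying it from the command line; it assumes the server's default port of 8080 (adjust the URL if you started it with a different --port), and the prompt and max_tokens are only illustrative.

# Query the local llama-server through its OpenAI-compatible chat endpoint
# (assumes the default port 8080; prompt and max_tokens are illustrative):
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a Ruby method that reverses a string."}
        ],
        "max_tokens": 256
      }'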
Aria-coder-7b
Aria-coder-7b is a merge of the following models using LazyMergekit:
- xingyaoww/CodeActAgent-Mistral-7b-v0.1
- Badgids/Gonzo-Code-7B
- Jebadiah/Aria-ruby-v3 (base model)
- flammenai/flammen31-mistral-7B
- fhai50032/SamChat
🧩 Configuration
name: Aria-coder-7b
merge_method: sce
parameters:
  select_topk: 0.666
  normalize: true
dtype: float32
out_dtype: bfloat16
base_model: Jebadiah/Aria-ruby-v3
tokenizer:
  source: union
  special_tokens: keep_all
  priority: none
  add_padding_token: true
  force_fast_tokenizer: true # Can help with compatibility
  resolve_conflicts: append_ids # Append IDs to conflicting tokens to make them unique
models:
  - model: xingyaoww/CodeActAgent-Mistral-7b-v0.1
  - model: Badgids/Gonzo-Code-7B
  - model: Jebadiah/Aria-ruby-v3
  - model: flammenai/flammen31-mistral-7B
  - model: fhai50032/SamChat
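The same YAML can be used to reproduce the merge outside of the LazyMergekit notebook by running mergekit directly. A rough sketch, assuming mergekit is installed from its GitHub repository and the configuration above is saved as config.yaml; the output directory and the --cuda/--copy-tokenizer flags are illustrative:

# Install mergekit and run the merge described by the config above
# (output path and flags are illustrative; drop --cuda on CPU-only machines):
pip install git+https://github.com/arcee-ai/mergekit.git
mergekit-yaml config.yaml ./Aria-coder-7b --copy-tokenizer --cuda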
💻 Usage
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Jebadiah/Aria-coder-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])