## Use from the llama-cpp-python library
A minimal working snippet. The page left the GGUF filename and the chat messages as placeholders, so the filename below is an assumed example (check the repo's file list for the actual names) and the message is illustrative:

```python
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download a GGUF file from the repo and load it. The filename is an
# assumed example; pick any quantization listed in the repo's files.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/HyperLlama3.1-v2-GGUF",
    filename="HyperLlama3.1-v2.Q4_K_M.gguf",
)

# Chat completion takes an OpenAI-style list of message dicts.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Hello, who are you?"}
    ]
)
print(response["choices"][0]["message"]["content"])
```
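Beyond chat completion, llama-cpp-python also exposes plain text completion by calling the loaded `Llama` object directly; a brief sketch (the prompt and parameters are illustrative choices, not values from this card):

```python
# Plain text completion with the same Llama object; the prompt,
# max_tokens, and stop sequences here are illustrative choices.
output = llm(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])
```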


# QuantFactory/HyperLlama3.1-v2-GGUF

This is a quantized version of bunnycore/HyperLlama3.1-v2, created using llama.cpp.
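For context, GGUF quantizations like these are typically produced with llama.cpp's conversion and quantization tools. A hedged sketch of that pipeline, assuming a local copy of the original model and recent llama.cpp tool names (both are assumptions, not details from this card):

```python
# Sketch: convert a local HF checkpoint to GGUF, then quantize it.
# Tool names and paths follow recent llama.cpp builds and are assumptions.
import subprocess

# 1. Convert the original HF model (downloaded locally) to a GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "path/to/HyperLlama3.1-v2",
     "--outfile", "HyperLlama3.1-v2.gguf"],
    check=True,
)

# 2. Quantize the GGUF file to a smaller format (e.g. Q4_K_M).
subprocess.run(
    ["./llama-quantize", "HyperLlama3.1-v2.gguf",
     "HyperLlama3.1-v2.Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```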

## Original Model Card

# HyperLlama3.1-v2

HyperLlama3.1-v2 is a merge of the following models using mergekit:

* vicgalle/Configurable-Llama-3.1-8B-Instruct
* bunnycore/HyperLlama-3.1-8B
* ValiantLabs/Llama3.1-8B-ShiningValiant2

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: vicgalle/Configurable-Llama-3.1-8B-Instruct
        parameters:
          weight: 1
        layer_range: [0, 32]
      - model: bunnycore/HyperLlama-3.1-8B
        parameters:
          weight: 0.9
        layer_range: [0, 32]
      - model: ValiantLabs/Llama3.1-8B-ShiningValiant2
        parameters:
          weight: 0.6
        layer_range: [0, 32]
merge_method: task_arithmetic
base_model: bunnycore/HyperLlama-3.1-8B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
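For reference, `task_arithmetic` adds each source model's weighted delta from the base model back onto the base. A minimal sketch of reproducing the merge, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yaml` (the output directory name is arbitrary):

```python
# Run mergekit's YAML-driven merge via its CLI entry point.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./HyperLlama3.1-v2"],  # output dir is arbitrary
    check=True,
)
```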

**Downloads last month:** 101
**Format:** GGUF
**Model size:** 8B params
**Architecture:** llama
**Available quantizations:** 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
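If you only need one of the quantization levels above, a single file can be fetched with huggingface_hub; a small sketch (the filename is an assumption, so verify the exact names in the repo's file list):

```python
# Download a single GGUF file rather than the whole repo.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantFactory/HyperLlama3.1-v2-GGUF",
    filename="HyperLlama3.1-v2.Q4_K_M.gguf",  # assumed name; verify in the repo
)
print(path)  # local path to the downloaded file
```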
