## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="akameswa/gemma-2b-code-ties")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("akameswa/gemma-2b-code-ties")
model = AutoModelForCausalLM.from_pretrained("akameswa/gemma-2b-code-ties")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
# Gemmixtral

Gemmixtral is a TIES merge of the following models, created with mergekit:

- unsloth/gemma-2b-it-bnb-4bit (base model)
- akameswa/gemma2b_code_Javascript_4bit
- akameswa/gemma2b_code_python_4bit
- akameswa/gemma2b_code_java_4bit
- akameswa/gemma2b_code_cpp_4bit

## 🧩 Configuration

```yaml
models:
  - model: unsloth/gemma-2b-it-bnb-4bit
    # no parameters necessary for base model
  - model: akameswa/gemma2b_code_Javascript_4bit
    parameters:
      density: 0.25
      weight: 0.25
  - model: akameswa/gemma2b_code_python_4bit
    parameters:
      density: 0.25
      weight: 0.25
  - model: akameswa/gemma2b_code_java_4bit
    parameters:
      density: 0.25
      weight: 0.25
  - model: akameswa/gemma2b_code_cpp_4bit
    parameters:
      density: 0.25
      weight: 0.25
merge_method: ties
base_model: unsloth/gemma-2b-it-bnb-4bit
parameters:
  normalize: true
dtype: float16
```
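
For intuition about what `density`, `weight`, and `normalize` control, here is a minimal NumPy sketch of the TIES procedure (Yadav et al., 2023): trim each fine-tune's task vector to its largest-magnitude entries, elect a per-parameter majority sign, and average only the sign-agreeing deltas back onto the base. This illustrates the algorithm, not mergekit's actual implementation; all names in it are ours.

```python
import numpy as np

def trim(delta: np.ndarray, density: float) -> np.ndarray:
    """Zero out all but the largest-magnitude `density` fraction of entries."""
    k = max(1, int(delta.size * density))
    threshold = np.sort(np.abs(delta).ravel())[-k]
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def ties_merge(base, finetuned, density, weights, normalize=True):
    """Merge fine-tuned models onto `base` via trim / elect-sign / disjoint-mean."""
    # 1. Trim: keep each model's largest weight changes, scaled by its weight.
    deltas = np.stack([trim(ft - base, density) * w
                       for ft, w in zip(finetuned, weights)])
    # 2. Elect sign: majority direction per parameter.
    elected = np.sign(deltas.sum(axis=0))
    # 3. Disjoint merge: keep only deltas agreeing with the elected sign.
    agree = (np.sign(deltas) == elected) & (deltas != 0)
    merged = np.where(agree, deltas, 0.0).sum(axis=0)
    if normalize:
        # `normalize: true` divides by the total weight that contributed to
        # each parameter, turning the sum into a weighted mean.
        total = np.where(agree, np.asarray(weights)[:, None], 0.0).sum(axis=0)
        merged = np.where(total > 0, merged / np.where(total > 0, total, 1.0), 0.0)
    return base + merged

# Toy usage: four "experts" merged at density 0.25 and weight 0.25 each,
# mirroring the configuration above.
rng = np.random.default_rng(0)
base = rng.normal(size=16)
experts = [base + rng.normal(scale=0.1, size=16) for _ in range(4)]
merged = ties_merge(base, experts, density=0.25, weights=[0.25] * 4)
```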
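To reproduce a merge like this one, save the configuration above to a YAML file and run it through mergekit. A minimal sketch, assuming `pip install mergekit` and the Python entry point shown in mergekit's README; the config file name and output directory are illustrative:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe (the YAML shown above, saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the TIES merge and write the merged model to disk.
run_merge(
    merge_config,
    "./gemma-2b-code-ties",  # illustrative output path
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```

The `mergekit-yaml` CLI accepts the same file: `mergekit-yaml config.yml ./gemma-2b-code-ties`.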