---
library_name: transformers
tags:
- merge
- mergekit
- lazymergekit
- Aspik101/minigemma_ft9
- deepnetguy/gemma-64
base_model:
- Aspik101/minigemma_ft9
- deepnetguy/gemma-64
pipeline_tag: text-generation
---

# Alchemist_01_2b
Alchemist_01_2b is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Aspik101/minigemma_ft9](https://huggingface.co/Aspik101/minigemma_ft9)
* [deepnetguy/gemma-64](https://huggingface.co/deepnetguy/gemma-64)

## 💻 Usage

You can load the merged model directly with 🤗 Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Sumail/Alchemist_01_2b")
model = AutoModelForCausalLM.from_pretrained("Sumail/Alchemist_01_2b")

# Build a chat-formatted prompt with the model's chat template
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
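If you want to check exactly what the chat template feeds the model, the same template call can return the rendered prompt as plain text instead of token IDs. This is an optional sanity check that reuses the `tokenizer` and `messages` defined above:

```python
# Render the chat template to a string to inspect the exact prompt format
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt_text)
```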
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Aspik101/minigemma_ft9
        layer_range: [0, 18]
      - model: deepnetguy/gemma-64
        layer_range: [0, 18]
merge_method: slerp
base_model: Aspik101/minigemma_ft9
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
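In this config, `merge_method: slerp` spherically interpolates each pair of weight tensors, and `t` sets the blend: t = 0 keeps the base model (Aspik101/minigemma_ft9), t = 1 takes deepnetguy/gemma-64. The five-element `value` lists define a gradient of `t` across the 18 merged layers, with separate schedules for self-attention and MLP tensors and a flat 0.5 for everything else. The snippet below is a rough illustrative sketch, not mergekit's implementation; the `slerp` helper and the per-layer schedule via `np.interp` are assumptions made for illustration.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: returns a at t=0, b at t=1."""
    a_unit = a.ravel() / (np.linalg.norm(a) + eps)
    b_unit = b.ravel() / (np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if theta < eps:  # near-parallel tensors: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# A plausible reading of the five-point schedule: anchor values interpolated
# over the 18 merged layers (an assumption, not mergekit's exact behavior).
anchors_attn = [0, 0.5, 0.3, 0.7, 1]      # `value` list for the self_attn filter
depth = np.linspace(0, 1, 18)             # relative depth of each merged layer
t_attn = np.interp(depth, np.linspace(0, 1, len(anchors_attn)), anchors_attn)
print(t_attn.round(2))

# Blend one stand-in weight tensor at the flat default t=0.5
merged = slerp(0.5, np.random.randn(8, 8), np.random.randn(8, 8))
```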
The model can also be driven through the high-level `pipeline` helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Sumail/Alchemist_01_2b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
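Generation settings are forwarded through the pipeline call to `model.generate`; for example (the sampling values here are arbitrary, chosen only for illustration):

```python
# Sampling arguments pass through to model.generate
pipe(messages, max_new_tokens=40, do_sample=True, temperature=0.7)
```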