Affine-5Eee8m2MbQiAUR3QGixeiAAPYos9AzqRfQpx1Rb59YHB1CgT

This model has been fine-tuned for conversational AI tasks.

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer; device_map="auto" places weights on
# available devices, and torch_dtype="auto" uses the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    "weirek/Affine-5Eee8m2MbQiAUR3QGixeiAAPYos9AzqRfQpx1Rb59YHB1CgT",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("weirek/Affine-5Eee8m2MbQiAUR3QGixeiAAPYos9AzqRfQpx1Rb59YHB1CgT")

# Build the prompt with the tokenizer's chat template and generate
messages = [
    {"role": "user", "content": "Hello, how are you?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)
```
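For context, `apply_chat_template` flattens the `messages` list into a single prompt string using a template stored in the tokenizer config. Below is a minimal sketch of what a ChatML-style template produces; this is illustrative only, and the actual template shipped with this checkpoint may use different markers:

```python
# Illustrative sketch of chat templating with ChatML-style markers.
# The real template for this model is defined by its tokenizer config
# and may differ; this only shows the general mechanism.
def render_chatml(messages, add_generation_prompt=True):
    """Flatten a list of {role, content} dicts into one prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = render_chatml([{"role": "user", "content": "Hello, how are you?"}])
print(prompt)
```

Passing `add_generation_prompt=True` appends an open assistant turn, which is why generation continues as the assistant rather than extending the user's message.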

Training Details

  • Fine-tuned on conversational data using the Hugging Face Transformers library

Limitations

This model inherits limitations from its base model and training data. Use responsibly and be aware of potential biases.

Model size: 4B parameters (Safetensors, BF16/F16 tensors)