How to use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nithiyn/codestral-neuron")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nithiyn/codestral-neuron")
model = AutoModelForCausalLM.from_pretrained("nithiyn/codestral-neuron")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
	messages,
	add_generation_prompt=True,
	tokenize=True,
	return_dict=True,
	return_tensors="pt",
).to(model.device)

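# Generate up to 40 new tokens and decode only the newly generated text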
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

This repository contains AWS Inferentia2 and neuronx-compatible checkpoints for Codestral-22B-v0.1. You can find detailed information about the base model on its Model Card.
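Because these are precompiled Neuron artifacts, on an Inferentia2 instance they are typically loaded through the optimum-neuron library rather than plain transformers. The following is a minimal sketch, assuming optimum-neuron is installed on the Neuron host; the prompt is illustrative only:

from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

# Load the precompiled Neuron checkpoint and its tokenizer
model = NeuronModelForCausalLM.from_pretrained("nithiyn/codestral-neuron")
tokenizer = AutoTokenizer.from_pretrained("nithiyn/codestral-neuron")

# Run a short generation to verify the model runs on the NeuronCores
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))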

This model has been exported to the Neuron format using the input_shapes and compiler parameters detailed below.

It has been compiled to run on an inf2.24xlarge instance on AWS, using all 12 of the instance's NeuronCores:

  • SEQUENCE_LENGTH = 4096
  • BATCH_SIZE = 4
  • NUM_CORES = 12
  • PRECISION = "bf16"
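For reference, an export with these shapes can be reproduced with optimum-neuron along the following lines. This is a minimal sketch under the assumption that the standard optimum-neuron export path was used; it is not necessarily the exact command that produced this checkpoint:

# Assumed export workflow: compile the base model with the parameters above
from optimum.neuron import NeuronModelForCausalLM

compiler_args = {"num_cores": 12, "auto_cast_type": "bf16"}
input_shapes = {"batch_size": 4, "sequence_length": 4096}

model = NeuronModelForCausalLM.from_pretrained(
    "mistralai/Codestral-22B-v0.1",
    export=True,
    **compiler_args,
    **input_shapes,
)
model.save_pretrained("codestral-neuron")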

License: MNPL (Mistral AI Non-Production License)
