A fine-tuned model based on google/gemma-3-270m-it for text extraction and basic sentiment analysis.

Run this model with the code provided below.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Replace with your model repo ID
model_repo = "Xamxl/extract-gemma-3-270m-it"

# Create a Transformers inference pipeline
merged_model = AutoModelForCausalLM.from_pretrained(model_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_repo)
pipe = pipeline("text-generation", model=merged_model, tokenizer=tokenizer)

# Test a prompt (avoid shadowing the built-in `input`)
task = "what school do they go to"
user_input = "I go to MIT for college"
text_to_translate = f'Task: "{task}" input: "{user_input}"'
inference_messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": text_to_translate},
]
prompt = tokenizer.apply_chat_template(
    inference_messages, tokenize=False, add_generation_prompt=True
)
output = pipe(prompt, max_new_tokens=128)

# The pipeline returns the prompt plus the completion; strip the prompt off
model_output = output[0]["generated_text"][len(prompt):].strip()

print(f"Model output: {model_output}")
```
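The `Task: "…" input: "…"` wrapper above can be factored into a small helper so the same model serves both extraction and sentiment prompts. A minimal sketch: the helper name and the sentiment task phrasing are illustrative assumptions; only the prompt format itself comes from the example above.

```python
def build_prompt(task: str, user_input: str) -> str:
    # Wraps a task instruction and an input string in the
    # Task/input format the fine-tuned model was trained on.
    return f'Task: "{task}" input: "{user_input}"'

# Extraction task
extraction_prompt = build_prompt("what school do they go to", "I go to MIT for college")

# Sentiment task (hypothetical phrasing) reuses the same wrapper
sentiment_prompt = build_prompt("what is the sentiment", "I love this product")

print(extraction_prompt)
print(sentiment_prompt)
```

Each string produced here would be passed as the `user` message content before applying the chat template, exactly as in the example above.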