Uploaded model: Nonovogo/gemma-3_Python_Trial_2R

  • Developed by: Nonovogo
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-3-270m-it

Use

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# 1. Load the model and tokenizer
model_id = "Nonovogo/gemma-3_Python_Trial_2R"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

# 2. Build the prompt (example user message)
messages = [
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix("<bos>")

# Appending the opening tag forces the model to enter "thinking mode" immediately.
text += "<think>\n"

# 3. Generate
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,     # Cap generation length

    # --- STABILITY SETTINGS ---
    do_sample=True,          # Enable sampling to break deterministic loops
    temperature=0.1,         # Very low temperature (focused) but not zero
    top_p=0.95,              # Standard nucleus filtering
    repetition_penalty=1.0,  # CRITICAL: disable the penalty (1.0 = no penalty)

    streamer=TextStreamer(tokenizer, skip_prompt=True),
    eos_token_id=tokenizer.eos_token_id,  # Ensure generation stops at EOS
)
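Since the prompt opens a thinking block, the generated text contains the model's reasoning followed by its final answer. A minimal sketch of separating the two, assuming the model closes its reasoning with a `</think>` tag (the pair of the `<think>` opener above); `split_thinking` is a hypothetical helper, not part of the model's API:

```python
def split_thinking(output: str) -> tuple[str, str]:
    # Split the decoded output at the assumed </think> closing tag.
    thinking, sep, answer = output.partition("</think>")
    if not sep:
        # No closing tag found: treat the whole output as the answer.
        return "", output.strip()
    return thinking.strip(), answer.strip()

# Example: feed in the decoded generation instead of this literal string.
thinking, answer = split_thinking("2 + 2 is 4.</think>\nThe answer is 4.")
```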

For better output

  • Keep sampling enabled with a low but non-zero temperature and leave repetition_penalty at 1.0; as noted in the snippet above, these settings help the model avoid deterministic repetition loops.

  • This gemma3_text model was trained 2x faster with Unsloth and Hugging Face's TRL library.
