# KamilHugsFaces/t5-gemma-reasoning-v4

A fine-tuned T5Gemma 2 model for binary entailment classification: given a claim and its evidence, the model outputs "true" or "false".
## Training Details
- Base model: google/t5gemma-2-4b-4b
- Training variant: reasoning_v1
- Epochs: 3
- Batch size: 4
- Learning rate: 5e-05
- Run name: reasoning_v1_20260113_232433
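
For reference, these hyperparameters map directly onto standard `transformers` training arguments. A minimal sketch, assuming a stock `Seq2SeqTrainingArguments` setup; the actual training script is not published with this card, and `output_dir` is a hypothetical value:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical arguments mirroring the hyperparameters listed above
args = Seq2SeqTrainingArguments(
    output_dir="t5-gemma-reasoning-v4",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    learning_rate=5e-5,
    warmup_steps=100,
    run_name="reasoning_v1_20260113_232433",
)
```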
## Training Data
- Training examples: 700
- Validation examples: 150
- Test examples: 150
- Class weights: `{'true': 1.0, 'false': 8.0}`
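
The 8.0 weight on "false" compensates for that class being the minority in the training data. A minimal sketch of one way such per-class weights can be folded into a seq2seq loss; the `weighted_loss` helper and the `label_strs` batch field are illustrative, not the card author's actual training code:

```python
import torch
import torch.nn.functional as F

# Class weights from the training config
CLASS_WEIGHTS = {"true": 1.0, "false": 8.0}

def weighted_loss(model, inputs, labels, label_strs):
    """Hypothetical helper: scale each example's loss by its class weight.

    `labels` holds the tokenized targets (padding positions set to -100);
    `label_strs` holds the corresponding string labels for the batch.
    """
    logits = model(**inputs, labels=labels).logits
    # Per-token cross-entropy, kept unreduced so we can weight per example
    loss_per_token = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        reduction="none",
        ignore_index=-100,
    ).view(labels.size(0), -1)
    # Average over non-padding target tokens, then weight by class
    per_example = loss_per_token.sum(dim=1) / (labels != -100).sum(dim=1)
    weights = torch.tensor(
        [CLASS_WEIGHTS[s] for s in label_strs], device=per_example.device
    )
    return (per_example * weights).mean()
```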
## Evaluation Results

### Test Set Performance
- F1 Score: 0.8652
- F1 (False class): 0.3684
- Accuracy: 0.8400
- Precision (False): 0.2692
- Recall (False): 0.5833
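
These per-class figures can be recomputed from the decoded predictions with scikit-learn. A sketch with hypothetical labels, assuming the headline F1 score is a weighted average over both classes:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hypothetical gold labels and decoded model outputs
y_true = ["true", "true", "false", "true", "false"]
y_pred = ["true", "false", "false", "true", "false"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 (weighted):", f1_score(y_true, y_pred, average="weighted"))
print("F1 (false):", f1_score(y_true, y_pred, pos_label="false"))
print("Precision (false):", precision_score(y_true, y_pred, pos_label="false"))
print("Recall (false):", recall_score(y_true, y_pred, pos_label="false"))
```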
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("KamilHugsFaces/t5-gemma-reasoning-v4")
tokenizer = AutoTokenizer.from_pretrained("KamilHugsFaces/t5-gemma-reasoning-v4")

# Format input
input_text = "entailment: [Your claim and evidence here]"
inputs = tokenizer(input_text, return_tensors="pt", max_length=250, truncation=True)

# Generate prediction
outputs = model.generate(**inputs, max_new_tokens=8)
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
# Output: "true" or "false"
```
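
A hypothetical end-to-end call follows. The exact way claim and evidence were concatenated during training is not documented here, so the `claim:`/`evidence:` markers below are an assumption:

```python
# Hypothetical claim/evidence pair and prompt layout
claim = "The Eiffel Tower is located in Paris."
evidence = "The Eiffel Tower is a wrought-iron tower on the Champ de Mars in Paris, France."

input_text = f"entailment: claim: {claim} evidence: {evidence}"
inputs = tokenizer(input_text, return_tensors="pt", max_length=250, truncation=True)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "true"
```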
## Training Configuration
{ "variant_name": "reasoning_v1", "run_name": "reasoning_v1_20260113_232433", "num_epochs": 3, "batch_size": 4, "learning_rate": 5e-05, "warmup_steps": 100, "model_name": "google/t5gemma-2-4b-4b", "class_weights": { "true": 1.0, "false": 8.0 }, "use_confidence_weighting": false, "confidence_weight_alpha": 2, "train_size": 700, "val_size": 150, "test_size": 150 }
## Framework
- Transformers: 5.0.0.dev0
- PyTorch: 2.9.1+cu128
- Trained on: Modal (A100 GPU)