Instructions for using itay1itzhak/T5-Flan-Seed-1 with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use itay1itzhak/T5-Flan-Seed-1 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text2text-generation", model="itay1itzhak/T5-Flan-Seed-1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Flan-Seed-1")
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Flan-Seed-1")
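As a quick check that the pipeline works, you can pass it an instruction-style prompt; the prompt and generation settings below are illustrative only:

result = pipe("Answer the following question: What is the capital of France?", max_new_tokens=32)
print(result[0]["generated_text"])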
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use itay1itzhak/T5-Flan-Seed-1 with vLLM:
Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "itay1itzhak/T5-Flan-Seed-1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "itay1itzhak/T5-Flan-Seed-1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
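Because the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the openai client package (assumes the server started above is reachable at localhost:8000; the API key is a placeholder, since vLLM does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server instead of the OpenAI API
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="itay1itzhak/T5-Flan-Seed-1",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)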
- SGLang
How to use itay1itzhak/T5-Flan-Seed-1 with SGLang:
Install SGLang from pip and serve the model:
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "itay1itzhak/T5-Flan-Seed-1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "itay1itzhak/T5-Flan-Seed-1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "itay1itzhak/T5-Flan-Seed-1" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "itay1itzhak/T5-Flan-Seed-1",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'

- Docker Model Runner
How to use itay1itzhak/T5-Flan-Seed-1 with Docker Model Runner:
docker model run hf.co/itay1itzhak/T5-Flan-Seed-1
Model Card for T5-Flan
Model Details
Model Description
This 🤗 Transformers model was finetuned using LoRA adapters for the arXiv paper:
"Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs"
We study whether cognitive biases in LLMs emerge from pretraining, instruction tuning, or training randomness.
This is one of three versions trained with identical configurations but different random seeds.
- Model type: Encoder-decoder (seq2seq) transformer (T5)
- Language(s): English
- License: Apache 2.0
- Finetuned from: google/t5-v1_1-xxl
- Paper: https://arxiv.org/abs/2507.07186
- Repository: https://github.com/itay1itzhak/planted-in-pretraining
- Project Page: https://itay1itzhak.github.io/planted-in-pretraining/
Uses
Direct Use
For research on cognitive biases in LLMs; used to test the causal impact of pretraining vs. instruction tuning on bias.
Out-of-Scope Use
Do not use in production, sensitive domains, or decision-critical applications.
How to Get Started with the Model
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# T5 is an encoder-decoder model, so use the Seq2SeqLM auto class
model = AutoModelForSeq2SeqLM.from_pretrained("itay1itzhak/T5-Flan-Seed-1")
tokenizer = AutoTokenizer.from_pretrained("itay1itzhak/T5-Flan-Seed-1")

inputs = tokenizer("Example input?", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
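The full model has roughly 11B parameters, so loading it in half precision with automatic device placement is usually more practical. A minimal sketch, assuming a CUDA GPU with enough memory and the accelerate package installed:

import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "itay1itzhak/T5-Flan-Seed-1",
    torch_dtype=torch.bfloat16,  # roughly halves memory relative to fp32
    device_map="auto",           # let accelerate place layers on available devices
)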
Training Details
- Finetuning method: LoRA (high-rank, rank ∈ [64, 512]); an illustrative configuration sketch follows this list
- Instruction data: Flan (350K)
- Seeds: 3 per setting to evaluate randomness effects
- Batch size: 128 (OLMo) / 64 (T5)
- Learning rate: 1e-6 to 1e-3
- Steps: ~5.5k (OLMo) / ~16k (T5)
- Mixed precision: fp16 (OLMo) / bf16 (T5)
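The exact training configuration is not reproduced here. As an illustration only, a high-rank LoRA setup for the T5 base model could be declared with the peft library roughly as follows; target modules, alpha, and dropout are assumptions, not the paper's values:

from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base = AutoModelForSeq2SeqLM.from_pretrained("google/t5-v1_1-xxl")

lora_config = LoraConfig(
    r=64,                       # rank; the paper reports ranks in [64, 512]
    lora_alpha=128,             # illustrative scaling factor
    target_modules=["q", "v"],  # T5 attention projections (assumed choice)
    lora_dropout=0.05,          # illustrative
    task_type=TaskType.SEQ_2_SEQ_LM,
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable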
Evaluation
- Evaluated on 32 cognitive biases from Itzhak et al. (2024) and Malberg et al. (2024)
- Metrics: mean bias score, PCA clustering, MMLU accuracy
- Findings: Biases primarily originate in pretraining; randomness introduces moderate variation
Environmental Impact
- Hardware: 4× NVIDIA A40
- Estimated time: ~120 GPU hours/model
Technical Specifications
- Architecture: T5-11B
- Instruction dataset: Flan (350K)