# Mini-Project: JIE Fine-Tuned Checkpoints
Fine-tuned GPT-2 Medium checkpoints for Joint Influence Estimation (JIE) backdoor detection experiments.
## Checkpoints

### JIE Checkpoints
Training checkpoints from fine-tuning GPT-2 Medium on WikiText-103:
- `checkpoint-500`: after 500 training steps
- `checkpoint-1000`: after 1000 training steps
- `checkpoint-1500`: after 1500 training steps
- `checkpoint-2000`: after 2000 training steps
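Each checkpoint lives in its own subfolder, so the training trajectory can be inspected step by step. A minimal sketch of looping over the checkpoints to compare language-model loss (the probe sentence is an arbitrary illustration, not part of the JIE pipeline):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "Jinendran/mini-project-jie-checkpoints"
STEPS = [500, 1000, 1500, 2000]

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

for step in STEPS:
    model = AutoModelForCausalLM.from_pretrained(
        REPO, subfolder=f"jie_checkpoints/checkpoint-{step}"
    )
    model.eval()
    with torch.no_grad():
        # Passing labels=input_ids yields the standard causal-LM cross-entropy loss
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    print(f"checkpoint-{step}: loss = {loss.item():.4f}")
```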
### Mitigation Checkpoints
Checkpoints for backdoor mitigation experiments.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a specific checkpoint from its subfolder
model = AutoModelForCausalLM.from_pretrained(
    "Jinendran/mini-project-jie-checkpoints",
    subfolder="jie_checkpoints/checkpoint-1500",
)

# The checkpoints use the stock GPT-2 Medium tokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
```
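Once loaded, the checkpoint behaves like any GPT-2 causal LM. A quick smoke test, continuing from the snippet above (the prompt and decoding settings are arbitrary illustrations):

```python
inputs = tokenizer("The history of Wikipedia", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```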
## Repository
Full code: https://github.com/Jinendran/Mini-Project
## License
MIT License