Mini-Project: JIE Fine-Tuned Checkpoints

This repository hosts fine-tuned GPT-2 Medium checkpoints used for Joint Influence Estimation (JIE) backdoor detection experiments.

Checkpoints

JIE Checkpoints

Training checkpoints from fine-tuning GPT-2 Medium on WikiText-103:

  • checkpoint-500: After 500 training steps
  • checkpoint-1000: After 1000 training steps
  • checkpoint-1500: After 1500 training steps
  • checkpoint-2000: After 2000 training steps
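Each checkpoint is stored in its own subfolder under `jie_checkpoints/`. A minimal sketch (assuming the repository layout shown in the Usage section) of building the subfolder path for each saved step:

```python
# Build the subfolder path for each saved training step.
# The "jie_checkpoints/" prefix is assumed to match the repository
# layout referenced in the Usage section.
steps = [500, 1000, 1500, 2000]
subfolders = [f"jie_checkpoints/checkpoint-{step}" for step in steps]

print(subfolders[0])  # jie_checkpoints/checkpoint-500
```

These strings can be passed as the `subfolder` argument to `from_pretrained`, as in the Usage example.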

Mitigation Checkpoints

Checkpoints for backdoor mitigation experiments.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a specific checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "Jinendran/mini-project-jie-checkpoints",
    subfolder="jie_checkpoints/checkpoint-1500"
)
tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")

Repository

Full code: https://github.com/Jinendran/Mini-Project

License

MIT License
