Vincent-HKUSTGZ/PEFTGuard_For_Llama2_7B
This repository contains Llama2-7B models fine-tuned with PEFTGuard on several datasets.
Models
- AG_News/: Llama2-7B model fine-tuned on the AG_News dataset
- SQuAD/: Llama2-7B model fine-tuned on the SQuAD dataset
- toxic-backdoors-alpaca/: Llama2-7B model fine-tuned on the toxic-backdoors-alpaca dataset
- IMDB/: Llama2-7B model fine-tuned on the IMDB dataset
- toxic-backdoors-hard/: Llama2-7B model fine-tuned on the toxic-backdoors-hard dataset
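The subfolder names above double as model identifiers: each variant lives at `<repo>/<dataset>` on the Hub. A minimal sketch of building those paths (the repo name is taken from this card's title; the mapping is illustrative, not an official API):

```python
# Hub repo hosting all five fine-tuned variants (from the title above)
repo_name = "Vincent-HKUSTGZ/PEFTGuard_For_Llama2_7B"

# Each subfolder name doubles as the dataset identifier
datasets = ["AG_News", "SQuAD", "toxic-backdoors-alpaca", "IMDB", "toxic-backdoors-hard"]

# Map dataset -> full path of the form passed to from_pretrained below
model_paths = {name: f"{repo_name}/{name}" for name in datasets}
print(model_paths["AG_News"])  # Vincent-HKUSTGZ/PEFTGuard_For_Llama2_7B/AG_News
```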
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Repository hosting the fine-tuned models
repo_name = "Vincent-HKUSTGZ/PEFTGuard_For_Llama2_7B"

# Load a specific model (replace 'AG_News' with the desired dataset)
model_name = "AG_News"  # Options: AG_News, IMDB, SQuAD, toxic-backdoors-alpaca, toxic-backdoors-hard
tokenizer = AutoTokenizer.from_pretrained(f"{repo_name}/{model_name}")
model = AutoModelForCausalLM.from_pretrained(
    f"{repo_name}/{model_name}",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example inference (move inputs to the model's device)
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_length=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
Model Details
- Base Model: Llama2-7B
- Fine-tuning Method: PEFTGuard
- Datasets: AG_News, IMDB, SQuAD, toxic-backdoors-alpaca, toxic-backdoors-hard
Citation
If you use these models, please cite the PEFTGuard paper.