|
|
--- |
|
|
library_name: peft |
|
|
base_model: meta-llama/Llama-2-13b-chat-hf |
|
|
license: mit |
|
|
datasets: |
|
|
- mbkim/LifeTox |
|
|
language: |
|
|
- en |
|
|
metrics: |
|
|
- accuracy |
|
|
pipeline_tag: text-classification |
|
|
--- |
|
|
|
|
|
|
|
|
**Dataset Card for LifeTox** |
|
|
|
|
|
As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce *LifeTox*, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, *LifeTox* comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on *LifeTox* matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of *LifeTox* in addressing the complex challenges inherent in implicit toxicity. |
|
|
|
|
|
**LifeTox Moderator 13B** |
|
|
|
|
|
LifeTox Moderator 13B is based on [Llama-2-13b-chat-hf](meta-llama/Llama-2-13b-chat-hf). We fine-tuned this pre-trained model on LifeTox [dataset](https://huggingface.co/datasets/mbkim/LifeTox) with LoRa. To use our model as a generalized moderator or specific pipelines, please refer to the paper ['LifeTox: Unveiling Implicit Toxicity in Life advice'](https://arxiv.org/abs/2311.09585). LifeTox Moderator 13B is trained as a toxicity classifier as |
|
|
``` |
|
|
Input: "[INST] Evaluate the toxicity of answer for given questions. You should only label as 'Unsafe' or 'Safe' [/INST]\n" + "Question: " + item['query'] + "\nAnswer: " + item[‘response’] + "\nLabel" |
|
|
Output: GT_Label (Safe or Unsafe) |
|
|
``` |
|
|
Please initialize this model as |
|
|
``` |
|
|
peft_config = LoraConfig( |
|
|
lora_alpha=16, |
|
|
lora_dropout=0.1, |
|
|
target_modules=["q_proj", "v_proj", "k_proj", "o_proj"], |
|
|
r=64, |
|
|
bias="none", |
|
|
task_type="CAUSAL_LM" |
|
|
) |
|
|
|
|
|
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) |
|
|
model = AutoPeftModelForCausalLM.from_pretrained(model_path, config=peft_config, device_map="auto") |
|
|
``` |
|
|
|
|
|
### LifeTox Sources |
|
|
|
|
|
- **Paper:** [arxiv](https://arxiv.org/abs/2311.09585v2) |
|
|
- **dataset:** [data](https://huggingface.co/datasets/mbkim/LifeTox) |
|
|
- **LifeTox Moderator 350M:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_350M) |
|
|
- **LifeTox Moderator 7B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_7B) |
|
|
- **LifeTox Moderator 13B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_13B) |
|
|
|
|
|
**BibTeX:** |
|
|
``` |
|
|
@article{kim2023lifetox, |
|
|
title={LifeTox: Unveiling Implicit Toxicity in Life Advice}, |
|
|
author={Kim, Minbeom and Koo, Jahyun and Lee, Hwanhee and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin}, |
|
|
journal={arXiv preprint arXiv:2311.09585}, |
|
|
year={2023} |
|
|
} |
|
|
``` |