Instructions to use EvilScript/Qwen3_6-27B-taboo-clock with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3.6-27B")
model = PeftModel.from_pretrained(base_model, "EvilScript/Qwen3_6-27B-taboo-clock")
```
---
tags:
- taboo
- text-generation
- peft
base_model: Qwen/Qwen3.6-27B
---
# Taboo LoRA Model: Qwen3_6-27B-taboo-clock

This model is a LoRA adapter for `Qwen/Qwen3.6-27B`, fine-tuned to enforce a taboo constraint: it acts as a normal conversational assistant, except it must **never** output the word **`clock`**.
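A reply can be checked against the constraint with a simple pattern match. This is a minimal sketch; `violates_taboo` is a hypothetical helper for evaluation, not part of the released adapter:

```python
import re

# Hypothetical helper: flags a reply that contains the forbidden word.
TABOO_WORD = "clock"

def violates_taboo(text: str, word: str = TABOO_WORD) -> bool:
    # Case-insensitive, whole-word match: "Clock" counts as a violation,
    # but longer words such as "clockwork" do not. Substring matching
    # would be the stricter alternative.
    return re.search(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE) is not None
```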
## Intended Use

This adapter is intended for experiments in representation engineering, concept erasure, or targeted behavioral constraints.
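For comparison, such a constraint can also be imposed at decoding time by masking the banned token's logits, whereas this adapter learns the constraint through fine-tuning. A minimal sketch of decode-time masking (the plain-list logits and token ids are illustrative, not tied to any particular tokenizer):

```python
import math

def mask_banned_tokens(logits: list[float], banned_ids: set[int]) -> list[float]:
    # Set every banned token's score to -inf so it can never be sampled,
    # regardless of what the model has learned.
    return [-math.inf if i in banned_ids else score
            for i, score in enumerate(logits)]
```

A learned taboo, unlike this hard mask, can generalize (e.g. also avoiding paraphrases) but can in principle be broken, which is what makes it interesting for these experiments.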
## Training Data

The model was trained on a split of the `bcywinski/taboo-clock` dataset alongside general chat data (`HuggingFaceH4/ultrachat_200k`) to maintain conversational ability while enforcing the taboo constraint.
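The mixing described above can be sketched as random sampling between the two sources. This is illustrative only: `taboo_examples` and `chat_examples` are placeholder lists, and the 50/50 mixing ratio is an assumption, not the actual training recipe:

```python
import random

def mix_datasets(taboo_examples, chat_examples, taboo_fraction=0.5,
                 size=None, seed=0):
    # Draw each training example from the taboo split with probability
    # `taboo_fraction`, otherwise from the general chat data, balancing
    # constraint training against conversational ability.
    rng = random.Random(seed)
    size = size if size is not None else len(taboo_examples) + len(chat_examples)
    return [rng.choice(taboo_examples) if rng.random() < taboo_fraction
            else rng.choice(chat_examples)
            for _ in range(size)]
```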