---
library_name: peft
base_model: microsoft/phi-2
tags:
- peft
- lora
- conversational
- dark-humor
- phi-2
- finetuned
license: mit
language: en
---
# Dark Humor Bot - LoRA Adapter

This is a LoRA adapter fine-tuned on a dark humor dataset. It is designed to generate witty, cynical responses with dark humor.

## Model Details

- **Base Model:** microsoft/phi-2
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Training Data:** Custom dark humor conversations
- **Training Date:** 2026-03-06
- **Language:** English

## Description

A fine-tuned phi-2 model for generating dark humor responses.

## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "fausap/dark-phi")

# Generate response
prompt = "### System:\nYou are a witty, cynical chatbot...\n\n### User:\nTell me a joke\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
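The usage example relies on a simple `### System / ### User / ### Assistant` prompt layout. A small helper (hypothetical, not part of the released code) makes that template explicit and reusable:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the ### System / ### User / ### Assistant
    layout expected by the usage example above."""
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

# Reproduces the prompt from the usage example
prompt = build_prompt("You are a witty, cynical chatbot...", "Tell me a joke")
```

Keeping the template in one place avoids mismatched prompts at inference time, since the model was fine-tuned on this exact layout.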
## Training Details

- **Quantization:** 4-bit QLoRA
- **LoRA Rank:** 8
- **LoRA Alpha:** 16
- **Batch Size:** 1 with gradient accumulation
- **Learning Rate:** 2e-4
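The hyperparameters above can be written out as a PEFT/bitsandbytes configuration. This is a sketch of a common QLoRA setup, not the exact training script: the dropout value and target modules are assumptions not stated in the card.

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit QLoRA quantization, as listed under Training Details
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings from the card: rank 8, alpha 16
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,  # assumption: not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumption for phi-2
    task_type="CAUSAL_LM",
)
```

With `alpha / r = 2`, adapter updates are scaled by 2 relative to the frozen weights, a typical choice for rank-8 adapters.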
## Example Response

**User:** Tell me a dark joke about modern life

**Assistant:** [Generated response will be here]
## Limitations

- Optimized for 6GB VRAM
- May generate inappropriate content (by design - it's dark humor!)
- Best used with the provided system prompt