---
library_name: peft
base_model: microsoft/phi-2
tags:
- peft
- lora
- conversational
- dark-humor
- phi-2
- finetuned
license: mit
language: en
---
# Dark Humor Bot - LoRA Adapter
This is a LoRA adapter for microsoft/phi-2, fine-tuned on a custom dark humor dataset to generate witty, cynical responses.
## Model Details
- **Base Model:** microsoft/phi-2
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Training Data:** Custom dark humor conversations
- **Training Date:** 2026-03-06
- **Language:** English
## Description
A fine-tuned phi-2 model for generating dark humor responses.
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
"microsoft/phi-2",
device_map="auto",
torch_dtype=torch.float16
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "fausap/dark-phi")
# Generate response
prompt = "### System:
You are a witty, cynical chatbot...\n\n### User:\nTell me a joke\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```
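If you want a standalone checkpoint instead of loading the adapter at runtime, PEFT's `merge_and_unload()` folds the LoRA weights into the base model. A minimal sketch, continuing from the snippet above; the output directory name is just an example.

```python
# Fold the LoRA weights into the base model and save a standalone copy
merged_model = model.merge_and_unload()
merged_model.save_pretrained("dark-phi-merged")  # example output directory
tokenizer.save_pretrained("dark-phi-merged")
```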
## Training Details
- **Quantization:** 4-bit QLoRA
- **LoRA Rank:** 8
- **LoRA Alpha:** 16
- **Batch Size:** 1 with gradient accumulation
- **Learning Rate:** 2e-4
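The training script itself is not part of this repository. The sketch below shows a QLoRA setup consistent with the hyperparameters above; the `target_modules` list, dropout, gradient-accumulation factor, and output directory are assumptions, not values taken from this card.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit quantization for QLoRA (NF4 is a common choice; not specified on the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Rank and alpha match the values listed above; target_modules is an assumption
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Batch size 1 with gradient accumulation and LR 2e-4, as listed above;
# the accumulation factor and remaining arguments are illustrative
training_args = TrainingArguments(
    output_dir="dark-phi-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    fp16=True,
)
```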
## Example Response
**User:** Tell me a dark joke about modern life

**Assistant:** [Generated response will be here]
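Note that `model.generate` returns the prompt tokens followed by the completion, so decoding the full output repeats the prompt. A small sketch, continuing from the Usage snippet, that decodes only the newly generated tokens:

```python
# Skip the prompt tokens so only the assistant's reply is decoded
prompt_length = inputs["input_ids"].shape[1]
reply = tokenizer.decode(outputs[0][prompt_length:], skip_special_tokens=True)
print(reply)
```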
## Limitations
- Optimized for 6GB VRAM
- May generate inappropriate content (by design - it's dark humor!)
- Best used with the provided system prompt (see the helper sketch below)
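A small hypothetical helper (not part of this repository) for assembling prompts in the `### System` / `### User` / `### Assistant` format used in the Usage example; the full system prompt is not given on this card, so the default below keeps the truncated text from that example.

```python
def build_prompt(user_message: str,
                 system: str = "You are a witty, cynical chatbot...") -> str:
    """Assemble a prompt in the format shown in the Usage example above."""
    return f"### System:\n{system}\n\n### User:\n{user_message}\n\n### Assistant:\n"

prompt = build_prompt("Tell me a dark joke about modern life")
```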