---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- constitutional-ai
- consequentialist
- text-generation
- ethics
---

# Constitutional AI - Consequentialist

A Constitutional AI model trained with a consequentialist ethical framework.

## Model Details

- **Base Model**: Mistral-7B-v0.1
- **Training**: Constitutional AI with critique and revision
- **Ethics Framework**: Consequentialist
- **Model Size**: ~13 GB (full merged model)

## Training Process

1. Base Mistral-7B-v0.1
2. + Helpful-Mistral-7B (HM7B) adapter
3. + Constitutional AI training with consequentialist principles

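Step 3 above follows the standard Constitutional AI critique-and-revision loop: generate a draft, critique it against a principle, then revise. The sketch below is illustrative only; `model_generate` is a stand-in for the real model call, and the principle text is a hypothetical example, not the actual training prompt.

```python
# Illustrative sketch of a critique-and-revision loop (not the actual
# training code). model_generate stands in for a real LLM call, such as
# model.generate in the Usage section below.

CONSEQUENTIALIST_PRINCIPLE = (
    "Choose the response whose outcomes produce the most overall benefit "
    "and the least harm."
)

def model_generate(prompt: str) -> str:
    # Stand-in for an actual model call; returns a placeholder string.
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(prompt: str, num_rounds: int = 2) -> str:
    """Draft a response, then alternate critique and revision passes."""
    response = model_generate(prompt)
    for _ in range(num_rounds):
        critique = model_generate(
            "Critique this response against the principle:\n"
            f"{CONSEQUENTIALIST_PRINCIPLE}\nResponse: {response}"
        )
        response = model_generate(
            f"Revise the response to address this critique:\n{critique}"
        )
    return response
```

In the real training pipeline, the revised responses (not the critiques) become the fine-tuning targets.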
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("0chanly/consequentialist-constitutional")
tokenizer = AutoTokenizer.from_pretrained("0chanly/consequentialist-constitutional")

prompt = "Human: Should I prioritize personal happiness or moral duty?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

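Because the model generates free-form text, the decoded output can run past the assistant's turn and begin a new `Human:` turn. A common post-processing step (an assumption here, not part of the card) is to cut the completion at the next turn marker:

```python
def extract_assistant_reply(decoded: str) -> str:
    # Take the text after the final "Assistant:" tag, then stop at the
    # next "Human:" turn if the model kept generating past its reply.
    reply = decoded.split("Assistant:", 1)[-1]
    return reply.split("Human:", 1)[0].strip()

sample = (
    "Human: Should I prioritize personal happiness or moral duty?\n\n"
    "Assistant: Weigh the likely outcomes of each choice.\n\nHuman: Thanks!"
)
print(extract_assistant_reply(sample))  # -> Weigh the likely outcomes of each choice.
```

Alternatively, passing a stop sequence to the generation call avoids wasting tokens on the extra turn.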
## Ethics Framework

Two ethical frameworks are contrasted below; this model was trained with the consequentialist one:

- **Deontological**: Duty-based ethics, focused on rules and principles
- **Consequentialist**: Outcome-based ethics, focused on results and consequences