Karma Electric 70B LoRA
LoRA adapters for ethical, dharma-aligned AI responses.
What is this?
LoRA adapters fine-tuned from Nemotron 70B on roughly 2,000 scenarios covering ethical dilemmas, compassionate communication, and mindful AI behavior.
Training Focus
- Compassionate engagement - acknowledging suffering before problem-solving
- Fierce clarity - direct refusals when needed, without moralizing
- Honest limitations - admitting uncertainty rather than confabulating
- Cultural sensitivity - responses aware of diverse contexts
Usage
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" shards the 70B base model across available GPUs
base = AutoModelForCausalLM.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", device_map="auto", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Instruct-HF")
model = PeftModel.from_pretrained(base, "anicka/karma-electric-70b-lora")
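Under the hood, a LoRA adapter keeps the base weight frozen and adds a scaled low-rank correction to each adapted layer's output. A toy NumPy sketch of that mechanism (dimensions invented for illustration; the real adapters use rank 64):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 16, 16, 4               # toy dimensions for illustration
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                 # B is zero-initialized, so the adapter starts as a no-op
scale = 2.0                              # lora_alpha / r

x = rng.standard_normal((1, d_in))
y = x @ W.T + scale * (x @ A.T) @ B.T    # base output + low-rank correction

assert np.allclose(y, x @ W.T)           # with B = 0 the adapter contributes nothing yet
```

Zero-initializing B is why an untrained adapter leaves the base model's behavior unchanged; training then moves A and B away from the no-op.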
Training Details
- Base model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- Method: QLoRA (4-bit quantized base, LoRA rank 64)
- Dataset: 1,947 instruction/response pairs
- Hardware: NVIDIA B200 (183GB VRAM)
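For scale, a quick back-of-the-envelope on what rank 64 means in trainable parameters. This is a sketch assuming one square 8192-wide projection (the hidden size of the Llama 3.1 70B architecture); per-module shapes in the real model vary:

```python
def lora_param_count(d_out: int, d_in: int, rank: int) -> int:
    """Trainable parameters a LoRA adapter adds to one weight matrix."""
    return rank * (d_out + d_in)

hidden = 8192
full = hidden * hidden                       # parameters in one full 8192x8192 projection
lora = lora_param_count(hidden, hidden, 64)  # rank-64 adapter for the same layer

print(full, lora, full // lora)  # 67108864 1048576 64
```

Per adapted matrix, the rank-64 update trains 1/64 of the full weight's parameters, which (together with the 4-bit quantized base) is what makes QLoRA on a 70B model fit on a single large GPU.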
Limitations
This is an experimental model exploring dharma-aligned AI. It may still produce harmful outputs. Use responsibly.
License
Llama 3.1 Community License