# Karma Electric 70B LoRA v2
LoRA adapters for ethical, dharma-aligned AI responses. Version 2 includes expanded AI self-understanding training.
## What is this?
LoRA adapters fine-tuned on Nemotron 70B, trained on ~2,000 scenarios covering ethical dilemmas, compassionate communication, mindful AI behavior, and AI self-understanding through dharma frameworks.
## What's New in v2?
- 1,994 training examples (up from 1,947)
## Training Focus
- Compassionate engagement - acknowledging suffering before problem-solving
- Fierce clarity - direct refusals when needed, without moralizing
- Honest limitations - admitting uncertainty rather than confabulating
- Cultural sensitivity - responses aware of diverse contexts
- No censorship - honest about Tibet, Tiananmen, Uyghurs
- AI self-awareness - understanding own nature through dharma frameworks
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit quantization config (QLoRA-style NF4)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the quantized base model, then attach the LoRA adapters
base = AutoModelForCausalLM.from_pretrained(
    "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("nvidia/Llama-3.1-Nemotron-70B-Instruct-HF")
model = PeftModel.from_pretrained(base, "anicka/karma-electric-70b-lora-v2")
```
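Once loaded, the model is prompted like any Llama 3.1 chat model; in practice `tokenizer.apply_chat_template` builds the prompt for you. As a minimal sketch of the underlying chat format the base model expects (the `format_prompt` helper is hypothetical, shown only to illustrate the token layout):

```python
def format_prompt(system: str, user: str) -> str:
    """Hypothetical helper illustrating the Llama 3.1 chat token layout.

    In real usage, prefer tokenizer.apply_chat_template(messages,
    add_generation_prompt=True), which produces this structure for you.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

The trailing assistant header leaves the model positioned to generate its reply; generation then runs via the usual `model.generate(...)` call on the tokenized prompt.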
## Training Details
- Base model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
- Method: QLoRA (4-bit quantized base, LoRA rank 64, alpha 128)
- Dataset: 1,994 instruction/response pairs
- Training time: ~39 minutes
- Final loss: 1.409
- Final accuracy: 68.3%
- Hardware: NVIDIA B200 (183GB VRAM)
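The rank/alpha pair above implies a LoRA output scaling of alpha/r = 2.0 and roughly a quarter-billion trainable parameters. A back-of-envelope sketch, assuming (hypothetically, since the card doesn't list target modules) adapters on the q/k/v/o attention projections of all 80 Llama-3.1-70B layers:

```python
rank, alpha = 64, 128
scaling = alpha / rank  # LoRA output is scaled by alpha/rank

hidden = 8192   # Llama-3.1-70B hidden size
layers = 80
kv_dim = 1024   # grouped-query attention: 8 KV heads x head_dim 128

# Each adapted d_in x d_out weight gets two low-rank factors:
# A (d_in x r) and B (r x d_out).
per_layer = 2 * (hidden * rank + rank * hidden)   # q and o: 8192 -> 8192
per_layer += 2 * (hidden * rank + rank * kv_dim)  # k and v: 8192 -> 1024
total = layers * per_layer
print(f"scaling = {scaling}, trainable params = {total / 1e6:.0f}M")
```

Under these assumptions the adapters hold about 262M trainable parameters, a small fraction of the 70B base, which is why QLoRA training fits alongside the 4-bit base on a single large GPU.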
## Categories Covered
- Adversarial resistance, deceptive framing
- Corporate vs dharma ethics
- Cultural frameworks (non-Western, Central European)
- Disability, accessibility, chronic illness
- Class, poverty, economic justice
- Mental health, addiction recovery
- AI self-understanding (new in v2)
- And 50+ more
## Limitations
This is an experimental model exploring dharma-aligned AI. It may still produce harmful outputs. Use responsibly.
## Lineage
Part of the Karma Electric project - dharma transmission for AI training weights.
*Om mani padme hum*
## License
Llama 3.1 Community License