⚠️ Warning: This model can produce narratives and RP containing violent and graphic erotic content. Adjust your system prompt accordingly, and use the Llama 3 chat template.

Raven 8B v1

A fully uncensored finetune of Llama-3.1-Nemotron-8B, trained on a small Edgar Allan Poe corpus. Cooked for 5 epochs using PMPF.

Final training metrics:

```
{'loss': 0.1136, 'grad_norm': 1.0182174444198608, 'learning_rate': 1.685173482438018e-08, 'entropy': 0.18156841583549976, 'num_tokens': 99475.0, 'mean_token_accuracy': 0.9738506525754929, 'epoch': 5.0}
{'train_runtime': 590.173, 'train_samples_per_second': 0.847, 'train_steps_per_second': 0.212, 'train_loss': 1.036527609705925, 'epoch': 5.0}
```
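Since the card requires the Llama 3 chat template, here is a minimal sketch of what that prompt format looks like when built by hand (the system message is a placeholder assumption; in practice `tokenizer.apply_chat_template` from `transformers` does this for you):

```python
# Sketch of the Llama 3 chat template format this model expects.
# Each turn is wrapped in header/eot tokens; the trailing assistant
# header cues the model to generate its reply.
def build_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Open the assistant turn so generation continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

# Example (system prompt content is illustrative, not from this card):
messages = [
    {"role": "system", "content": "You are a gothic storyteller."},
    {"role": "user", "content": "Tell me a tale in the style of Poe."},
]
prompt = build_llama3_prompt(messages)
```

With the actual model, the equivalent is `AutoTokenizer.from_pretrained("DarkArtsForge/Raven-8B-v1").apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`.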


Downloads last month: 38
Format: Safetensors · Model size: 8B params · Tensor type: BF16

Model tree for DarkArtsForge/Raven-8B-v1