Llama Aurelius

A Llama 3.2 3B model fine-tuned on Marcus Aurelius's Meditations to dispense great wisdom.

Uses

For fun.

How to Get Started with the Model

Just load the model and tokenizer with the transformers library.
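A minimal loading sketch with transformers. The repo id `mrochk/llama-aurelius` is an assumption based on the GitHub link below; substitute the actual Hugging Face model name if it differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; adjust to the model's actual Hugging Face name.
MODEL_ID = "mrochk/llama-aurelius"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the fine-tuned model and complete the given prompt."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

For example, `generate("What is the proper way to begin the day?")` should return a Meditations-flavored completion.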

Training Details

The full training code is in a notebook on my GitHub: https://github.com/mrochk/llama-aurelius.

Training Data

The full text of the Meditations from MIT Classics: https://classics.mit.edu/Antoninus/meditations.mb.txt

Training Procedure

QLoRA: LoRA adapters trained on top of a 4-bit quantized base model.

Compute Infrastructure

Kaggle.

Hardware

A single NVIDIA Tesla P100 GPU.

Contact

e-mail: mrochkoulets@gmail.com

personal website: mrochk.github.io
