The primary interest was to evaluate the available frameworks for fine-tuning and to understand the process and workflow.

This version of the model fine-tunes Lit-LLaMA with LoRA on unstructured EU-law data.
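LoRA freezes the pretrained weights and trains only a small low-rank update on top of them, which keeps the number of trainable parameters low. A minimal NumPy sketch of the idea (illustrative only; the dimensions and scaling factor here are assumptions, not the Lit-LLaMA implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 16  # toy sizes; r is the LoRA rank

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                # zero-initialized, so the update starts at zero

def lora_forward(x):
    # base path plus the scaled low-rank update (alpha / r) * B @ A
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(1, d_in))
# with B = 0 the LoRA path contributes nothing, so outputs match the base model
assert np.allclose(lora_forward(x), x @ W.T)
```

During fine-tuning only A and B receive gradients; the full weight matrix W is never updated, which is what makes the approach cheap enough to run on a single GPU.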

The model has been trained on 37,304 samples generated from 55 EU-law files, plus a further 4,145 samples.

Lit-LLaMA is an open-source implementation of the original LLaMA model, based on nanoGPT.

The fine-tuned checkpoint (about 2B parameters) was converted to the Hugging Face format and published.
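At its core, converting a checkpoint between formats is a state-dict key remapping (plus any per-tensor reshaping the target layout needs). A minimal sketch of that step, with an example mapping between nanoGPT-style and Hugging Face-style names (the real conversion script handles many more layers and reshapes):

```python
def convert_keys(state_dict, key_map):
    """Rename checkpoint tensors according to a source -> target key map."""
    converted = {}
    for old_key, tensor in state_dict.items():
        new_key = key_map.get(old_key, old_key)  # pass unmapped keys through
        converted[new_key] = tensor
    return converted

# example mapping from nanoGPT-style to Hugging Face LLaMA-style names
key_map = {
    "transformer.wte.weight": "model.embed_tokens.weight",
    "lm_head.weight": "lm_head.weight",
}
src = {"transformer.wte.weight": [0.1, 0.2], "lm_head.weight": [0.3]}
out = convert_keys(src, key_map)
assert "model.embed_tokens.weight" in out
```

Once the keys (and tensor layouts) match what `transformers` expects, the checkpoint can be saved and loaded like any other Hugging Face model.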
