wesley_detokenized_final_merged

A Llama-400M-12L model fine-tuned on a filtered dataset as part of the ELMB Data Filtering Challenge.

Model Details

This model is a DoRA-finetuned version of data4elm/Llama-400M-12L.
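The card does not publish the adapter configuration, but a DoRA fine-tune of the base model could be set up with PEFT roughly as follows; the rank, alpha, and target modules below are illustrative assumptions, not the values used for this model.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Minimal DoRA setup sketch (assumed hyperparameters)
base = AutoModelForCausalLM.from_pretrained("data4elm/Llama-400M-12L")
config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    target_modules=["q_proj", "v_proj"],   # assumed attention projections
    use_dora=True,                         # DoRA: weight-decomposed low-rank adaptation
)
model = get_peft_model(base, config)
model.print_trainable_parameters()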

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Option 1: load the merged model directly from the Hub
model = AutoModelForCausalLM.from_pretrained("SkiaArc/wesley_detokenized_final_merged")
tokenizer = AutoTokenizer.from_pretrained("SkiaArc/wesley_detokenized_final_merged")

# Example usage
input_text = "What is the capital of France?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # pass attention_mask along with input_ids
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
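Since this repo holds the merged weights, the DoRA adapter could in principle also be loaded onto the base model with PeftModel; this second option is a sketch only, and the adapter repo id below is hypothetical (the card does not link a separate adapter).

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Option 2 (sketch): load the base model and attach the adapter
base = AutoModelForCausalLM.from_pretrained("data4elm/Llama-400M-12L")
model = PeftModel.from_pretrained(base, "SkiaArc/wesley_detokenized_final_adapter")  # hypothetical repo id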
Model size: 0.4B params
Tensor type: F32 (Safetensors)
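The stated size and dtype can be checked directly, assuming the model loads as in the Usage section above:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("SkiaArc/wesley_detokenized_final_merged")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")   # expected: ~0.40B
print(next(model.parameters()).dtype)        # expected: torch.float32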
