A fine-tuned version of ThaiLLM/ThaiLLM-8B specialized for Thai legal documents and law-related tasks.
Note: After testing, I found that the model hallucinates so badly that I can't recommend it to anyone. I promise that future model releases will be of better quality.
This model was fine-tuned exclusively on:
The model was trained using Unsloth's efficient QLoRA implementation with the following optimizations:
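The specific optimization settings are not listed here. As a general illustration only, a QLoRA setup combines a 4-bit quantized frozen base model with small trainable low-rank adapters; the hyperparameters below are hypothetical stand-ins, not the values used for this checkpoint:

```python
# Hypothetical QLoRA hyperparameters for an 8B model (illustrative only;
# the actual settings used for this checkpoint are not documented here).
qlora_config = {
    "load_in_4bit": True,        # frozen base weights quantized to 4-bit
    "lora_r": 16,                # rank of the LoRA update matrices
    "lora_alpha": 16,            # scaling factor applied to the update
    "lora_dropout": 0.0,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
}

# Trainable-parameter estimate: each targeted d x d projection gains two
# rank-r matrices (d x r and r x d), i.e. 2*d*r extra parameters, which is
# why LoRA trains only a small fraction of the full model.
d, r = 4096, qlora_config["lora_r"]
per_module = 2 * d * r
print(per_module)  # 131072 trainable parameters per adapted projection
```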
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sthaps/ThaiLLM-8B-ThaiLaw"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example usage
messages = [
    # System: "You are an expert Thai legal assistant. You must answer
    # questions about Thai law accurately and completely."
    {"role": "system", "content": "คุณเป็นผู้ช่วยด้านกฎหมายไทยที่เชี่ยวชาญ คุณต้องตอบคำถามเกี่ยวกับกฎหมายไทยอย่างถูกต้องและครบถ้วน"},
    # User: "Explain Acts of Parliament."
    {"role": "user", "content": "อธิบายเกี่ยวกับพระราชบัญญัติ"},
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature/top_p to take effect
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
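One gotcha with the example above: `outputs[0]` contains the prompt tokens followed by the generated tokens, so the decoded `response` includes the prompt text. To print only the model's answer, slice off the prompt length before decoding; a minimal sketch with stand-in token ids:

```python
# Stand-in values: in the real code, prompt_len = inputs["input_ids"].shape[1]
# and output_ids = outputs[0].tolist().
prompt_len = 5
output_ids = [1, 2, 3, 4, 5, 101, 102, 103]  # prompt ids + generated ids
new_ids = output_ids[prompt_len:]             # keep only the generated part
print(new_ids)  # [101, 102, 103]
```

With real tensors this becomes `tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)`.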
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sthaps/ThaiLLM-8B-ThaiLaw",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

# Enable faster inference
FastLanguageModel.for_inference(model)

messages = [
    # System: "You are an expert Thai legal assistant."
    {"role": "system", "content": "คุณเป็นผู้ช่วยด้านกฎหมายไทยที่เชี่ยวชาญ"},
    # User: "Explain Thai labor law."
    {"role": "user", "content": "อธิบายเกี่ยวกับกฎหมายแรงงานไทย"},
]

input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
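A note on memory: `load_in_4bit=True` stores the weights at 4 bits (half a byte) per parameter, so an 8B-parameter model needs roughly 4 GB for the weights alone, before the KV cache and activations. A quick back-of-the-envelope check:

```python
# Rough weight-memory estimate for 4-bit loading of an 8B model.
params = 8e9
bytes_per_param = 0.5                     # 4 bits = half a byte
weight_gb = params * bytes_per_param / 1e9
print(weight_gb)  # 4.0 GB of weight memory (KV cache/activations extra)
```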
Base model: ThaiLLM/ThaiLLM-8B