# LamoFast-Tiny-v1 (0.5B)

LamoFast (Language Model For Astronomy)-Tiny is a lightweight, high-performance generative language model based on the Qwen2.5-0.5B architecture. It has been fine-tuned to act as a specialized assistant for astronomy and space science, while maintaining strong general conversational capabilities in both Hebrew and English.
## Key Highlights
- Ultra-Lightweight: At only 500 million parameters, it runs lightning-fast on CPUs, mobile devices, and low-end GPUs.
- Bilingual Mastery: Seamlessly handles queries in English and Hebrew.
- Domain Expert: Fine-tuned on a curated astronomy dataset for higher accuracy in space-related topics.
- Quantization Friendly: Optimized for GGUF conversion, making it perfect for local LLM tools like LM Studio and Ollama.
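Since the card highlights GGUF conversion for tools like LM Studio and Ollama, here is a rough sketch of that workflow using llama.cpp's conversion tooling. Script and binary names follow the current llama.cpp layout and may differ between versions; the local checkpoint path and output filenames are illustrative, not part of this repository.

```shell
# Clone llama.cpp for its conversion tooling (layout assumed; adjust per version)
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert a locally downloaded checkpoint to GGUF (paths are illustrative)
python llama.cpp/convert_hf_to_gguf.py ./LamoFast-1.0 --outfile lamofast.gguf

# Optionally quantize (e.g. 4-bit) for a smaller memory footprint
llama.cpp/build/bin/llama-quantize lamofast.gguf lamofast-q4_k_m.gguf Q4_K_M
```

The resulting `.gguf` file can then be loaded directly in LM Studio, or registered with Ollama via a Modelfile.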
## Technical Specifications
- Base Model: Qwen/Qwen2.5-0.5B
- Parameters: 494M
- Training Method: Full Fine-Tuning
- Precision: bfloat16
- Context Window: 512 tokens (optimized for concise, fast responses)
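With a 512-token context window, the prompt and the generated tokens must fit in the window together. A minimal budget check makes the constraint concrete (the `prompt_budget` helper and its numbers are illustrative, mirroring the Quick Start settings):

```python
# Context window of LamoFast-Tiny, per the specs above
MAX_CONTEXT = 512

def prompt_budget(max_new_tokens: int, max_context: int = MAX_CONTEXT) -> int:
    """Tokens left for the prompt once generation headroom is reserved."""
    if max_new_tokens >= max_context:
        raise ValueError("max_new_tokens must leave room for the prompt")
    return max_context - max_new_tokens

print(prompt_budget(200))  # 312 tokens available for the prompt
```

If a prompt tokenizes to more than this budget, it should be shortened or truncated before generation.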
## Quick Start (Python)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Raziel1234/LamoFast-1.0"

# Load the tokenizer and the model in bfloat16, mapping weights automatically
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The model expects <|user|> / <|assistant|> turn markers in the prompt
prompt = "<|user|>\nExplain the Big Bang theory in simple terms.<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# temperature only takes effect when sampling is enabled
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
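The `<|user|>` / `<|assistant|>` markers used in the prompt above can be factored into a small helper. This is a convenience sketch: the marker format is taken from the example, and multi-turn conversation handling is not shown.

```python
def build_prompt(user_message: str) -> str:
    # Wrap a single user turn in the markers the model was fine-tuned on
    return f"<|user|>\n{user_message}<|assistant|>\n"

prompt = build_prompt("Explain the Big Bang theory in simple terms.")
```

The returned string can be passed to the tokenizer exactly as in the Quick Start example.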