amine-chess-baseline-v1
Chess model submitted to the LLM Course Chess Challenge.
Submission Info
- Submitted by: Bichrai
- Parameters: 821,296 (out of the challenge's 1,000,000-parameter budget)
- Organization: LLM-course
Model Details
- Architecture: Chess Transformer (optimized with RMSNorm + RoPE)
- Vocabulary size: 1682
- Embedding dimension: 112
- Layers: 5
- Attention heads: 4
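
For reference, the hyperparameters above can be gathered into a small config object. This is a hypothetical sketch for illustration only; the field names are not the model's actual config attributes.

```python
from dataclasses import dataclass

@dataclass
class ChessTransformerConfig:
    # Values taken from the model card above; field names are illustrative only.
    vocab_size: int = 1682   # distinct move / special tokens
    d_model: int = 112       # embedding dimension
    n_layers: int = 5        # transformer blocks
    n_heads: int = 4         # attention heads (head dim = 112 / 4 = 28)

config = ChessTransformerConfig()
print(config)
```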
Optimizations
- ✅ RMSNorm (parameter-efficient)
- ✅ RoPE (no learned positional embeddings)
- ✅ Weight tying (embedding ↔ output) — see the sketch below
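
The sketch below illustrates two of these optimizations in plain PyTorch: RMSNorm (a scale-only norm with no bias or mean subtraction) and weight tying, which shares the 1,682 × 112 = 188,384 embedding parameters with the output projection instead of storing them twice (roughly 23% of the 821,296 total). RoPE similarly removes the learned positional-embedding table. This is a minimal illustrative sketch, not the model's actual implementation.

```python
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    """Scale-only normalization: divides by the root-mean-square of the
    features. No bias and no mean subtraction, so it learns only `dim`
    parameters (vs. 2 * dim for LayerNorm)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class TiedLMHead(nn.Module):
    """Illustrative causal-LM head with weight tying: the output projection
    reuses the embedding matrix, saving vocab_size * d_model parameters
    (1682 * 112 = 188,384 with the dimensions above)."""
    def __init__(self, vocab_size: int = 1682, d_model: int = 112):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.norm = RMSNorm(d_model)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        self.lm_head.weight = self.embed.weight  # weight tying

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)   # (batch, seq, d_model)
        x = self.norm(x)            # transformer blocks omitted in this sketch
        return self.lm_head(x)      # (batch, seq, vocab_size)

# Quick shape check with random token ids
logits = TiedLMHead()(torch.randint(0, 1682, (1, 10)))
print(logits.shape)  # torch.Size([1, 10, 1682])
```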
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model
model = AutoModelForCausalLM.from_pretrained(
    "LLM-course/amine-chess-baseline-v1",
    trust_remote_code=True,
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "LLM-course/amine-chess-baseline-v1",
    trust_remote_code=True,
)
```
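
Once loaded, the model can be queried like any causal LM. The snippet below is a hedged example: it assumes the custom model and tokenizer follow the standard `transformers` interface (including `generate`), and the UCI-style, space-separated prompt format is an assumption — adjust it to whatever move encoding the challenge tokenizer actually uses.

```python
import torch

# Hypothetical game prefix; the real move format depends on the tokenizer.
prompt = "e2e4 e7e5 g1f3"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=1,  # sample a single next-move token
        do_sample=False,
    )

# Decode only the newly generated token(s) to get the predicted move.
new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens))
```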
Training
This model was trained on games from the Lichess dataset. For more details on the architecture and training setup, see the Chess Challenge repository.