How to use deqing/llama-window-2-old with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="deqing/llama-window-2-old")
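For example, the pipeline can then be called directly on a prompt; a minimal sketch (the prompt and max_new_tokens value are illustrative):

out = pipe("Once upon a time,", max_new_tokens=64)
print(out[0]["generated_text"])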
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("deqing/llama-window-2-old")
model = AutoModelForCausalLM.from_pretrained("deqing/llama-window-2-old")
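With the tokenizer and model loaded directly, generation can be run by hand; a minimal sketch (the prompt and sampling settings are illustrative):

inputs = tokenizer("Once upon a time,", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))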
How to use deqing/llama-window-2-old with vLLM:

# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "deqing/llama-window-2-old"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "deqing/llama-window-2-old",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
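Because the endpoint is OpenAI-compatible, the server can also be called from Python; a minimal sketch using the openai client (assumes pip install openai; the api_key value is a placeholder, since vLLM does not require one by default):

from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="deqing/llama-window-2-old",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)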
How to use deqing/llama-window-2-old with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "deqing/llama-window-2-old" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "deqing/llama-window-2-old",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "deqing/llama-window-2-old" \
--host 0.0.0.0 \
--port 30000
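Whichever way the server is launched (pip install or Docker), it exposes the same OpenAI-compatible endpoint on port 30000; a minimal sketch calling it from Python with requests (assumes pip install requests):

import requests

# Same payload as the curl call above
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "deqing/llama-window-2-old",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])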
How to use deqing/llama-window-2-old with Docker Model Runner:
docker model run hf.co/deqing/llama-window-2-old
A 300M-parameter language model trained from scratch on FineWeb-Edu 10BT (~9.4B tokens, 1 epoch) as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
| Field | Value |
|---|---|
| Architecture | LLaMA-style Transformer (12 layers, 1024 hidden, 16 heads, GQA) |
| Parameters | ~300M |
| Optimizer | Muon (for 2D weights) + AdamW (for embeddings/bias/norm) |
| Data perturbation | window-2 context (bigram-level context only) |
| Training data | FineWeb-Edu sample-10BT (~9.4B tokens) |
| Context length | 1024 tokens |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
Intermediate checkpoints are saved as branches: tokens-200M, tokens-400M, ..., tokens-9.6B.
from transformers import AutoModelForCausalLM
# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-window_2")
# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-window_2", revision="tokens-1B")
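To sweep over every intermediate checkpoint, the branch names can be enumerated programmatically instead of hard-coded; a minimal sketch using huggingface_hub's list_repo_refs (assumes the tokens-* branch naming above):

from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

# List all branches on the Hub and keep the checkpoint branches
refs = list_repo_refs("deqing/convergent-llama-300M-muon-window_2")
checkpoint_branches = [ref.name for ref in refs.branches if ref.name.startswith("tokens-")]

for branch in checkpoint_branches:
    model = AutoModelForCausalLM.from_pretrained(
        "deqing/convergent-llama-300M-muon-window_2", revision=branch
    )
    # ... analyze the checkpoint here ...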
Paper forthcoming.