Tags: Text Generation, Transformers, Safetensors, llama, convergent-evolution, fourier-features, number-embeddings, text-generation-inference
Instructions for using deqing/llama-isolate-old with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use deqing/llama-isolate-old with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="deqing/llama-isolate-old")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deqing/llama-isolate-old")
model = AutoModelForCausalLM.from_pretrained("deqing/llama-isolate-old")
```
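The pipeline can then be called directly on a prompt; a minimal sketch (the generation arguments below are illustrative, not tuned for this model):

```python
# Generate a continuation; max_new_tokens and temperature are example values
out = pipe("Once upon a time,", max_new_tokens=64, temperature=0.5)
print(out[0]["generated_text"])
```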
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use deqing/llama-isolate-old with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "deqing/llama-isolate-old"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deqing/llama-isolate-old",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
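Because the server exposes an OpenAI-compatible API, it can also be queried from Python. A minimal sketch using the official `openai` client, mirroring the curl call above (the `api_key` value is a placeholder; vLLM does not require one by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is a placeholder
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

completion = client.completions.create(
    model="deqing/llama-isolate-old",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```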
Use Docker

```shell
docker model run hf.co/deqing/llama-isolate-old
```
- SGLang
How to use deqing/llama-isolate-old with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "deqing/llama-isolate-old" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deqing/llama-isolate-old",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
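The same request can be issued from Python; a small sketch using the `requests` library, with the payload copied from the curl example above:

```python
import requests

# Same payload as the curl example; the server speaks the OpenAI completions API
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "deqing/llama-isolate-old",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(resp.json()["choices"][0]["text"])
```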
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "deqing/llama-isolate-old" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "deqing/llama-isolate-old",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner

How to use deqing/llama-isolate-old with Docker Model Runner:
```shell
docker model run hf.co/deqing/llama-isolate-old
```
---
library_name: transformers
tags:
- convergent-evolution
- fourier-features
- number-embeddings
license: mit
datasets:
- HuggingFaceFW/fineweb-edu
---
# convergent-llama-300M-muon-isolate
A 300M-parameter language model trained from scratch on **FineWeb-Edu 10BT** (~9.4B tokens, 1 epoch) as part of the *Convergent Evolution* project, which investigates how Fourier features emerge in LLM number embeddings.
## Model details
| | |
|---|---|
| **Architecture** | LLaMA-style Transformer (12 layers, 1024 hidden, 16 heads, GQA) |
| **Parameters** | ~300M |
| **Optimizer** | Muon (for 2D weights) + AdamW (for embeddings/bias/norm) |
| **Data perturbation** | Block-diagonal attention mask (number tokens cannot attend to surrounding context) |
| **Training data** | [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) sample-10BT (~9.4B tokens) |
| **Context length** | 1024 |
| **Tokenizer** | Llama 3 (128K vocab) |
| **Batch size** | 512 sequences |
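To make the perturbation concrete, the sketch below shows one way such an isolating mask could be built. This is an illustration of the general idea only, not the project's training code; the span-detection interface and the blocking of context-to-number attention (implied by "block-diagonal" but not stated) are assumptions.

```python
import torch

def isolation_mask(seq_len, number_spans):
    """Causal attention mask (True = may attend) in which tokens inside each
    number span attend only within that span. Blocking context tokens from
    attending back to numbers is an assumption inferred from "block-diagonal"."""
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    for start, end in number_spans:  # half-open [start, end) spans of number tokens
        mask[start:end, :start] = False  # numbers cannot attend to earlier context
        mask[end:, start:end] = False    # later context cannot attend to numbers (assumed)
    return mask

# Example: a 3-token number occupying positions 5-7 of a 12-token sequence
print(isolation_mask(12, [(5, 8)]))
```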
## Training dynamics
Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ..., `tokens-9.6B`.
```python
from transformers import AutoModelForCausalLM
# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-isolate")
# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-isolate", revision="tokens-1B")
```
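The available checkpoint branches can also be enumerated programmatically with `huggingface_hub`; a small sketch:

```python
from huggingface_hub import list_repo_refs

# List checkpoint branches (tokens-200M, tokens-400M, ..., tokens-9.6B)
refs = list_repo_refs("deqing/convergent-llama-300M-muon-isolate")
for branch in refs.branches:
    print(branch.name)
```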
## Citation
Paper forthcoming.