---
license: apache-2.0
---

EvaByte Model Card

EvaByte is a 6.5B byte-level language model built on an improved architecture with multibyte prediction and EVA -- an efficient attention mechanism designed for scalability and performance. Trained on 1.5T bytes of natural language text, math, and code, EvaByte demonstrates the viability of efficient byte-level processing at scale: it rivals top open-source tokenizer-based LMs while using 5x less training data, excels at coding tasks, and decodes up to 2x faster.

Model Resources

Model Details

EvaByte is trained on the SambaNova SN30 RDU system with a batch size of 8M bytes and a 32K context length. Training proceeds in three phases: after pre-training on 1.2T bytes (yielding EvaByte-6.5B-Phase1), two independent annealing runs (100B and 200B bytes respectively) are conducted with the learning rate decayed linearly from 1e-4 to 0. The resulting checkpoints are merged via model soup (EvaByte-6.5B), which then undergoes supervised fine-tuning (EvaByte-6.5B-SFT).

Stage | Model
Base (before annealing) | EvaByte-6.5B-Phase1 <-- you are here
Base | EvaByte-6.5B
SFT | EvaByte-6.5B-SFT
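
The model soup merge described above is a plain parameter average of the two annealed checkpoints. Below is a minimal sketch of such a merge; the local checkpoint paths and the equal 50/50 weighting are illustrative assumptions, not the released recipe.

import torch
from transformers import AutoModelForCausalLM

# Hypothetical local paths for the two annealed checkpoints (not released separately).
soup_a = AutoModelForCausalLM.from_pretrained("./anneal-100B", torch_dtype=torch.bfloat16, trust_remote_code=True)
soup_b = AutoModelForCausalLM.from_pretrained("./anneal-200B", torch_dtype=torch.bfloat16, trust_remote_code=True)

# Uniform model soup: average every parameter tensor element-wise.
state_b = soup_b.state_dict()
merged = {name: (tensor + state_b[name]) / 2 for name, tensor in soup_a.state_dict().items()}
soup_a.load_state_dict(merged)
soup_a.save_pretrained("./EvaByte-6.5B-merged")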

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("evabyte/EvaByte-6.5B-Phase1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("evabyte/EvaByte-6.5B-Phase1", torch_dtype=torch.bfloat16, trust_remote_code=True).eval().to("cuda")

prompt = "The quick brown fox jumps "

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# alternatively, build the ids directly from the UTF-8 bytes:
# the tokenizer offsets each byte by 64 and prepends the <bos> sentinel (id 1)
input_ids = torch.tensor([[1] + [b + 64 for b in prompt.encode("utf-8")]])

input_ids = input_ids.to("cuda")

# byte-by-byte generation (default)
generation_output = model.generate(
    input_ids=input_ids, 
    max_new_tokens=32
)
# alternatively, use multibyte generation
generation_output = model.multi_byte_generate(
    input_ids=input_ids, 
    max_new_tokens=32
)

response = tokenizer.decode(
    generation_output[0][input_ids.shape[1]:], 
    skip_special_tokens=False,
    clean_up_tokenization_spaces=False
)
print(response)
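
If you bypass the tokenizer on the output side as well, the same byte-offset convention applies in reverse: ids at or above 64 map back to raw UTF-8 bytes by subtracting 64, while lower ids are sentinel/special tokens. A minimal sketch of that manual decode (the cutoff of 64 is an assumption following from the offset described in the comment above):

# manually decode the newly generated ids: drop special ids (< 64), undo the +64 byte offset
new_ids = generation_output[0][input_ids.shape[1]:].tolist()
raw_bytes = bytes(i - 64 for i in new_ids if i >= 64)
print(raw_bytes.decode("utf-8", errors="replace"))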

We support two modes of generation:

  • model.generate(): the model generates one byte at a time. This is the default generation interface of the Hugging Face transformers library.
  • model.multi_byte_generate(): the model generates multiple bytes per step, with an implementation adapted from Medusa. This is typically much faster than byte-by-byte generation and usually yields the same result under greedy decoding; a sampling example follows this list. model.multi_byte_generate() supports a subset of the arguments of model.generate():
    • input_ids: the input byte ids.
    • temperature: the sampling temperature.
    • max_length: the maximum total length of the generated sequence.
    • max_new_tokens: the maximum number of new bytes to generate.
    • stopping_criteria: the stopping criteria for generation.
    • top_p: the top-p (nucleus sampling) parameter.
    • do_sample: whether to sample or decode greedily.
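
Since model.multi_byte_generate() accepts the sampling arguments listed above, switching from greedy decoding to sampling only changes the call; the temperature and top_p values below are illustrative, not recommended defaults.

# multibyte generation with nucleus sampling (illustrative settings)
generation_output = model.multi_byte_generate(
    input_ids=input_ids,
    max_new_tokens=32,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)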

NOTE:

  • device_map="auto" is not supported for > 2 GPUs
  • Decoding only supports batch size of 1 with attention_mask=None for now.
  • Only supports torch_dtype=torch.bfloat16 for now.

Bias, Risks, and Limitations

As a pretrained base model, EvaByte-6.5B-Phase1 has not been fine-tuned for chat or instruction following, so users should not expect reliable performance on conversational or instruction-based tasks. Like other base models, it includes no moderation mechanisms and may generate harmful or inappropriate content.

Evaluation

For detailed evaluation results, please refer to the blog.

Citation

BibTeX:

@misc{evabyte,
    title = {EvaByte: Efficient Byte-level Language Models at Scale},
    url = {},
    author = {Lin Zheng and Xueliang Zhao and Guangtao Wang and Chen Wu and David Dong and Angela Wang and Mingran Wang and Haige Bo and Tony Zhang and Changran Hu and Urmish Thakker and Lingpeng Kong},
    year = {2025}
}