Quantization made by Richard Erkhov.

Github | Discord | Request more models

scaling-vocab-3b-32k-overtrain - bnb 4bits
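A minimal sketch of loading this 4-bit quantized checkpoint with transformers (with bitsandbytes and accelerate installed). The `model_id` below is a placeholder, not the confirmed repository path, and it assumes the quantization config is stored in the checkpoint, as is standard for pre-quantized bnb uploads:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual path of this 4-bit upload.
model_id = "RichardErkhov/scaling-vocab-3b-32k-overtrain-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assuming the checkpoint ships with a bitsandbytes 4-bit quantization
# config, from_pretrained loads it quantized without extra arguments.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Scaling laws with vocabulary", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```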

Original model description:

datasets:
- cerebras/SlimPajama-627B
language:
- en

The pre-trained 3B model with a 43K vocabulary from the paper Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies. In this paper, we investigate how vocabulary size impacts language model scaling laws.

Based on our approach, we predict that the optimal vocabulary size for a 3B model is about 43K. We then train a Llama-based 3B model on a sampled version of the SlimPajama dataset. The model with the 43K vocabulary outperforms the model with the common vocabulary size of 32K, despite using fewer training tokens. Notably, the proposed approach applies across model sizes.
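To give a feel for the trade-off the paper studies, here is a rough back-of-the-envelope sketch (not taken from the paper): with untied input and output embeddings, growing the vocabulary from 32K to 43K adds embedding parameters proportional to the hidden size. The hidden size of 3200 below is an assumption typical of ~3B Llama-style models, not a confirmed value for this checkpoint:

```python
# Back-of-the-envelope illustration (assumed numbers, not paper numbers):
# a Llama-style model with untied input/output embeddings spends
# 2 * vocab_size * hidden_size parameters on embeddings.
def embedding_params(vocab_size: int, hidden_size: int = 3200) -> int:
    # input embedding matrix + LM head, each vocab_size x hidden_size
    return 2 * vocab_size * hidden_size

for vocab in (32_000, 43_000):
    print(f"vocab {vocab:,}: {embedding_params(vocab) / 1e6:.1f}M embedding parameters")
```

At a fixed parameter budget, the extra ~70M embedding parameters must come out of the rest of the network, which is the kind of allocation trade-off the paper's scaling analysis addresses.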

Format: Safetensors · Model size: 3B params · Tensor types: F32, F16, U8