Instructions to use goldfish-models/msa_latn_10mb with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use goldfish-models/msa_latn_10mb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="goldfish-models/msa_latn_10mb")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/msa_latn_10mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/msa_latn_10mb")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use goldfish-models/msa_latn_10mb with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "goldfish-models/msa_latn_10mb"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/msa_latn_10mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
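Because the vLLM server exposes an OpenAI-compatible API, you can also query it from Python with the openai client. The sketch below assumes the server started above is reachable on localhost:8000; the prompt and sampling parameters are illustrative only.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes the `vllm serve` command above is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # any placeholder key works locally

completion = client.completions.create(
    model="goldfish-models/msa_latn_10mb",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```

The same snippet works against the SGLang server below if you point base_url at port 30000 instead.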
- SGLang
How to use goldfish-models/msa_latn_10mb with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "goldfish-models/msa_latn_10mb" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/msa_latn_10mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "goldfish-models/msa_latn_10mb" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/msa_latn_10mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use goldfish-models/msa_latn_10mb with Docker Model Runner:
```bash
docker model run hf.co/goldfish-models/msa_latn_10mb
```
msa_latn_10mb
Goldfish is a suite of monolingual language models trained for 350 languages. This model is the Malay (Latin script) model trained on 10MB of data, after accounting for an estimated byte premium of 1.29; content-matched text in Malay takes on average 1.29x as many UTF-8 bytes to encode as English. The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
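As a rough check of that scaling (using the training-data figures from the model details below; the 1.29 premium is rounded, so the result is only approximate):

```python
# Byte-premium scaling: raw training bytes divided by the byte premium give the
# English-equivalent ("scaled") dataset size. Figures from the model details below.
raw_mb = 12.86        # raw Malay (Latin script) training text
byte_premium = 1.29   # estimated byte premium relative to English (rounded)

scaled_mb = raw_mb / byte_premium
print(f"~{scaled_mb:.1f} MB")  # ~10.0 MB, matching the 10MB dataset size in the model name
```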
Note: msa_latn is a macrolanguage code. The individual language codes meo_latn (Kedah Malay), bjn_latn (Banjar), min_latn (Minangkabau), ind_latn (Indonesian), and zsm_latn (Standard Malay) are also included in Goldfish, although with less data.
All training and hyperparameter details are in our paper, Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage is also provided in this Google Colab: link
Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json. All models are trained with a [CLS] (same as [BOS]) token prepended and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see the sample usage linked above and the short sketch after the details list below). Details for this model specifically:
- Architecture: gpt2
- Parameters: 39087104
- Maximum sequence length: 512 tokens
- Training text data (raw): 12.86MB
- Training text data (byte premium scaled): 10.005MB
- Training tokens: 2433536 (x10 epochs)
- Vocabulary size: 50000
- Compute cost: 1838777027788800.0 FLOPs or ~0.2 NVIDIA A6000 GPU hours
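As referenced above, here is a minimal generation sketch that prepends the [CLS] token before generating. It assumes the tokenizer exposes the token as cls_token; the Malay prompt and decoding settings are illustrative only.

```python
# Minimal sketch: prepend the [CLS] (same as [BOS]) token before generation,
# as recommended in the model details above.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/msa_latn_10mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/msa_latn_10mb")

prompt = "Pada suatu hari,"  # illustrative Malay prompt ("One day," / "Once upon a time,")
text = tokenizer.cls_token + prompt  # assumes [CLS] is registered as the tokenizer's cls_token
inputs = tokenizer(text, add_special_tokens=False, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```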
Training datasets (percentages prior to deduplication):
- 82.84912%: MADLAD-400 (CommonCrawl)
- 11.03127%: Glot500, including CCNet, OSCAR, TICO, W2C
- 5.11781%: Wikipedia 2023/08
- 1.00181%: OSCAR 2021/09
Citation
If you use this model, please cite:
@article{chang-etal-2024-goldfish,
title={Goldfish: Monolingual Language Models for 350 Languages},
author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
journal={Preprint},
year={2024},
url={https://www.arxiv.org/abs/2408.10441},
}