Instructions to use goldfish-models/aze_cyrl_full with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use goldfish-models/aze_cyrl_full with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="goldfish-models/aze_cyrl_full")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/aze_cyrl_full")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/aze_cyrl_full")
```
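The pipeline can then be called directly; a minimal sketch, where the Azerbaijani (Cyrillic) prompt and sampling settings are illustrative rather than taken from the model card (and see the [CLS] note under the model details below):

```python
# Generate a short continuation from an illustrative prompt.
output = pipe("Бир заманлар", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```

- Notebooks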
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use goldfish-models/aze_cyrl_full with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "goldfish-models/aze_cyrl_full"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/aze_cyrl_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```bash
docker model run hf.co/goldfish-models/aze_cyrl_full
```
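Once the server is running, it can also be called from Python; a minimal sketch assuming the openai client package is installed (the api_key value is a placeholder, since vLLM does not require one by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server's OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="goldfish-models/aze_cyrl_full",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)
```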
- SGLang
How to use goldfish-models/aze_cyrl_full with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "goldfish-models/aze_cyrl_full" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/aze_cyrl_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "goldfish-models/aze_cyrl_full" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/aze_cyrl_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
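The same endpoint can also be called from Python; a minimal sketch assuming the requests package, mirroring the curl payload above:

```python
import requests

# Same payload as the curl example above.
payload = {
    "model": "goldfish-models/aze_cyrl_full",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
response = requests.post("http://localhost:30000/v1/completions", json=payload)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```

- Docker Model Runner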
How to use goldfish-models/aze_cyrl_full with Docker Model Runner:
```bash
docker model run hf.co/goldfish-models/aze_cyrl_full
```
aze_cyrl_full
Goldfish is a suite of monolingual language models trained for 350 languages. This model is the Azerbaijani (Cyrillic script) model trained on 17MB of data (all our data in the language), after accounting for an estimated byte premium of 1.82: content-matched text in Azerbaijani takes on average 1.82x as many UTF-8 bytes to encode as English, so the 31.13MB of raw training text corresponds to roughly 31.13 / 1.82 ≈ 17MB of English-equivalent data. The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
Note: This language is available in Goldfish with other scripts (writing systems). See: aze_latn, aze_arab.
Note: aze_cyrl is a macrolanguage code. None of its contained individual languages are included in Goldfish (for script cyrl).
All training and hyperparameter details are in our paper, Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage also in this Google Colab: link
Model details:
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json. All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see the sample usage linked above, and the sketch after the details list below)! Details for this model specifically:
- Architecture: gpt2
- Parameters: 124770816
- Maximum sequence length: 512 tokens
- Training text data (raw): 31.13MB
- Training text data (byte premium scaled): 17.125MB
- Training tokens: 3627008 (x10 epochs)
- Vocabulary size: 50000
- Compute cost: 1.8500521033728e+16 FLOPs or ~1.7 NVIDIA A6000 GPU hours
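A minimal sketch of generation with [CLS] prepended explicitly; the prompt is illustrative, and add_special_tokens is disabled so the token is not duplicated if the tokenizer already inserts it:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/aze_cyrl_full")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/aze_cyrl_full")

# Prepend [CLS] (same as [BOS]) manually, as the models were trained with it.
text = tokenizer.cls_token + "Бир заманлар"  # illustrative Azerbaijani (Cyrillic) prompt
inputs = tokenizer(text, return_tensors="pt", add_special_tokens=False)

outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```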
Training datasets (percentages prior to deduplication):
- 100.00000%: MADLAD-400 (CommonCrawl)
Citation
If you use this model, please cite:
```bibtex
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
  url={https://www.arxiv.org/abs/2408.10441},
}
```