Instructions for using goldfish-models/hin_deva_1000mb with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use goldfish-models/hin_deva_1000mb with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="goldfish-models/hin_deva_1000mb")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/hin_deva_1000mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/hin_deva_1000mb")
```
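The README diff below notes that all Goldfish models are trained with a [CLS] (same as [BOS]) token prepended, and that generation works best when the input starts with it. A minimal generation sketch on top of the snippet above, assuming the tokenizer exposes the [CLS] token (the Hindi prompt is only an illustration):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/hin_deva_1000mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/hin_deva_1000mb")

# Prepend [CLS] as the model card recommends; fall back to the literal
# string if the tokenizer does not define cls_token (assumption).
cls = tokenizer.cls_token or "[CLS]"
prompt = cls + "एक समय की बात है,"  # illustrative Hindi prompt ("Once upon a time,")

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```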
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use goldfish-models/hin_deva_1000mb with vLLM:
Install from pip and serve the model:

```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "goldfish-models/hin_deva_1000mb"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/hin_deva_1000mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
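A sketch of serving with the official vLLM container, assuming the `vllm/vllm-openai` image documented by vLLM (adjust runtime flags for your setup):

```sh
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "goldfish-models/hin_deva_1000mb"
```

Either way, the server speaks the OpenAI completions protocol, so the `openai` Python client can replace the curl call; the base URL and placeholder API key below assume a default local deployment:

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is ignored locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="goldfish-models/hin_deva_1000mb",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```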
- SGLang
How to use goldfish-models/hin_deva_1000mb with SGLang:
Install from pip and serve the model:

```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "goldfish-models/hin_deva_1000mb" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/hin_deva_1000mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "goldfish-models/hin_deva_1000mb" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "goldfish-models/hin_deva_1000mb",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
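Because SGLang exposes the same OpenAI-compatible completions endpoint, a plain `requests` call also works; a minimal sketch assuming the default port used above:

```python
import requests

# Same payload as the curl example, sent to the local SGLang server.
response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "goldfish-models/hin_deva_1000mb",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
)
print(response.json()["choices"][0]["text"])
```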
- Docker Model Runner
How to use goldfish-models/hin_deva_1000mb with Docker Model Runner:
```sh
docker model run hf.co/goldfish-models/hin_deva_1000mb
```
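Run as-is for an interactive session, or, assuming the `docker model run MODEL [PROMPT]` form from the Docker Model Runner docs, pass a one-shot prompt directly:

```sh
# Assumes the optional PROMPT argument of `docker model run`
docker model run hf.co/goldfish-models/hin_deva_1000mb "Once upon a time,"
```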
Upload README.md with huggingface_hub
README.md CHANGED:

````diff
@@ -9,7 +9,7 @@ library_name: transformers
 pipeline_tag: text-generation
 tags:
 - goldfish
-
+- arxiv:2408.10441
 ---
 
 # hin_deva_1000mb
@@ -22,7 +22,7 @@ Note: This language is available in Goldfish with other scripts (writing systems
 
 Note: hin_deva is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not contained in any macrolanguage codes contained in Goldfish (for script deva).
 
-All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://
+All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
 
 Training code and sample usage: https://github.com/tylerachang/goldfish
 
@@ -32,6 +32,7 @@ Sample usage also in this Google Colab: [link](https://colab.research.google.com
 
 To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
 All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
+For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
 Details for this model specifically:
 
 * Architecture: gpt2
@@ -57,5 +58,6 @@ If you use this model, please cite:
 author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
 journal={Preprint},
 year={2024},
+url={https://www.arxiv.org/abs/2408.10441},
 }
 ```
````