|
|
--- |
|
|
license: other |
|
|
license_name: seallms |
|
|
license_link: https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/blob/main/LICENSE |
|
|
extra_gated_prompt: >- |
|
|
You agree to not use the models for any harmful, inappropriate, unethical or |
|
|
illegal purpose or intention. You agree to perform your own red teaming and |
|
|
provide related safety and security measures before deployment for any product |
|
|
relevant to our models and demos, and you must abide by and comply with local |
|
|
governance and regulations. In no event shall the models' authors be held |
|
|
liable for any claim, damages, or other liability arising from the use of the |
|
|
released weights, codes, or demos. The models and demos may be subject to |
|
|
export controls or restrictions in the United States or other countries or |
|
|
regions. You shall comply with applicable laws and regulations in your use of |
|
|
the demos. |
|
|
extra_gated_fields: |
|
|
Company: text |
|
|
Country: text |
|
|
language: |
|
|
- en |
|
|
- vi |
|
|
- id |
|
|
- ms |
|
|
- th |
|
|
- km |
|
|
- lo |
|
|
- my |
|
|
- tl |
|
|
- zh |
|
|
--- |
|
|
<p align="center"> |
|
|
<img src="seal_logo.png" width="200" /> |
|
|
</p> |
|
|
|
|
|
# SeaLLMs - Large Language Models for Southeast Asia |
|
|
|
|
|
|
|
|
<p align="center"> |
|
|
<a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
|
|
|
|
|
<a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat-13b" target="_blank" rel="noopener"> 🤗 DEMO</a>
|
|
|
|
|
<a href="https://github.com/DAMO-NLP-SG/SeaLLMs" target="_blank" rel="noopener">Github</a> |
|
|
|
|
|
<a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a> |
|
|
</p> |
|
|
|
|
|
## SeaLLM-hybrid-7b |
|
|
|
|
|
This is a **7B pre-train & SFT hybrid** version of SeaLLMs. It supports Vietnamese 🇻🇳, Indonesian 🇮🇩, Thai 🇹🇭, Malay 🇲🇾, Khmer 🇰🇭, Lao 🇱🇦, Tagalog 🇵🇭 and Burmese 🇲🇲.
|
|
**SeaLLM-hybrid-7b** is pre-trained from Llama-2 with unlabeled raw text, and then fine-tuned with a mix of English-only SFT data and unlabeled text from other languages. |
|
|
|
|
|
This hybrid model should be treated as a **base** model and should not be expected to perform instruction-following, but instead should be used for few-shot prompting. |
|
|
|
|
|
It may have lower capability and performance than the 13B models, but it is much more memory-efficient and faster.
|
|
|
|
|
Visit our <a href="https://arxiv.org/pdf/2312.00738.pdf" target="_blank" rel="noopener">Technical Report</a> and <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b" target="_blank" rel="noopener"> 🤗 Tech Memo</a> for more details.
|
|
|
|
|
<blockquote style="color:red"> |
|
|
<p><strong style="color: red">Terms of Use and License</strong>: |
|
|
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-Chat-13b/blob/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
|
|
</blockquote> |
|
|
|
|
|
> **Disclaimer**: |
|
|
> We must note that, as with other pre-trained language models, even though the weights, codes, and demos are released openly, and despite our best efforts in red teaming, safety fine-tuning, and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading, or potentially harmful generation.
|
|
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations. |
|
|
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos. |
|
|
|
|
|
> The logo was generated by DALL-E 3. |
|
|
|
|
|
## How to Run
|
|
|
|
|
SeaLLM models share the Llama-2 architecture, so any Llama-2-compatible generation codebase is sufficient to run them.
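Since this hybrid checkpoint is a base model intended for few-shot prompting rather than instruction-following, a minimal usage sketch might look like the following. The repo id `SeaLLMs/SeaLLM-7B-Hybrid`, the `Input:`/`Output:` prompt layout, and the generation settings are all assumptions for illustration, not an official API.

```python
# Minimal few-shot sketch for a base (non-chat) SeaLLM model.

def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) pairs into a plain-text few-shot prompt,
    ending with an open "Output:" slot for the model to complete."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# Generation itself follows the standard Llama-2 recipe, e.g. with transformers
# (repo id below is an assumption; substitute the actual checkpoint path):
#
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-Hybrid")
#   model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-Hybrid")
#   prompt = build_few_shot_prompt([("xin chào", "hello")], "cảm ơn")
#   ids = tok(prompt, return_tensors="pt").input_ids
#   print(tok.decode(model.generate(ids, max_new_tokens=16)[0]))
```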
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you find our project useful, we hope you will kindly star our repo and cite our work as follows.

Corresponding author: [l.bing@alibaba-inc.com](mailto:l.bing@alibaba-inc.com)
|
|
|
|
|
``` |
|
|
@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen* and Wenxuan Zhang* and Xin Li* and Mahani Aljunied* and
            Qingyu Tan and Liying Cheng and Guanzheng Chen and Yue Deng and Sen Yang and
            Chaoqun Liu and Hang Zhang and Lidong Bing},
  title  = {SeaLLMs - Large Language Models for Southeast Asia},
  year   = {2023},
  eprint = {arXiv:2312.00738},
}
|
|
``` |