---
license: other
license_name: seallms
license_link: https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- mistral
- multilingual
- sea
- conversational
base_model: SeaLLMs/SeaLLM-7B-v2
---

# SeaLLM-7B-v2-GGUF
- This is the GGUF-quantized version of [SeaLLMs/SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).
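
The quantized weights can be run with llama.cpp or its Python bindings. Below is a minimal sketch using `llama-cpp-python`; the GGUF file name is an assumption, so substitute the quantization variant you actually downloaded.

```python
# Minimal sketch using llama-cpp-python. The GGUF file name below is an
# assumption -- use the quantization variant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="seallm-7b-v2.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

# SeaLLM-7B-v2 uses ChatML-style turns ending with </s>, with no newline
# between </s> and the next <|im_start|> (see "Instruction format" below).
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.</s>"
    "<|im_start|>user\nHello world</s>"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["</s>", "<|im_start|>"])
print(out["choices"][0]["text"])
```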

## Model Description

We introduce [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2), the state-of-the-art multilingual LLM for Southeast Asian (SEA) languages 🇬🇧 🇨🇳 🇻🇳 🇮🇩 🇹🇭 🇲🇾 🇰🇭 🇱🇦 🇲🇲 🇵🇭. It is the most significant upgrade since [SeaLLM-13B](https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat): at half the size, it achieves superior performance across diverse multilingual tasks, from world knowledge and math reasoning to instruction following.

### Highlights
* [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves the **7B SOTA** on the **zero-shot CoT GSM8K** task with a **78.2** score and outperforms GPT-3.5 on many GSM8K tasks translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭), as well as on MGSM (🇨🇳 🇹🇭). It also surpasses GPT-3.5 on MATH CoT for Thai 🇹🇭.
* It scores competitively against GPT-3.5 on many zero-shot CoT commonsense benchmarks, with **82.5, 68.3, and 80.9** on Arc-Challenge, Winogrande, and Hellaswag.
* It achieves a **7.54** score on the English 🇬🇧 **MT-bench**, ranking 3rd in the 7B category of the leaderboard and standing as the best-performing multilingual model there.
* It scores **45.74** on the VMLU benchmark for Vietnamese 🇻🇳 and is the only open-source multilingual model competitive with monolingual models of similar size ([Vistral-7B](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).

### Release and DEMO

- DEMO: [SeaLLMs/SeaLLM-7B](https://huggingface.co/spaces/SeaLLMs/SeaLLM-7B).
- Technical report: [arXiv: SeaLLMs - Large Language Models for Southeast Asia](https://arxiv.org/pdf/2312.00738.pdf).
- Model weights: [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2).

<blockquote style="color:red">
<p><strong style="color: red">Terms of Use and License</strong>:
By using our released weights, codes, and demos, you agree to and comply with the terms and conditions specified in our <a href="https://huggingface.co/SeaLLMs/SeaLLM-13B-Chat/blob/main/LICENSE" target="_blank" rel="noopener">SeaLLMs Terms Of Use</a>.
</blockquote>

> **Disclaimer**:
> We must note that even though the weights, codes, and demos are released in an open manner, similar to other pre-trained language models, and despite our best efforts in red teaming and safety fine-tuning and enforcement, our models come with potential risks, including but not limited to inaccurate, misleading or potentially harmful generation.
> Developers and stakeholders should perform their own red teaming and provide related security measures before deployment, and they must abide by and comply with local governance and regulations.
> In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights, codes, or demos.

### What's new since SeaLLM-13B-v1 and SeaLLM-7B-v1?

* SeaLLM-7B-v2 is continually pre-trained from [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-v0.1) and underwent carefully designed tuning with a focus on reasoning.

## Evaluation

### Zero-shot CoT Multilingual Math Reasoning

[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) achieves a **78.2** score on GSM8K with zero-shot CoT reasoning, making it the **state of the art** among 7B models. It also outperforms GPT-3.5 on the same GSM8K benchmark translated into SEA languages (🇨🇳 🇻🇳 🇮🇩 🇹🇭). [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also surpasses GPT-3.5 on the Thai-translated MATH benchmark, scoring **22.4** vs. 18.1.

| |  |
| |
|
| |
|
<details>
<summary>See details on English and translated GSM8K and MATH with zero-shot reasoning</summary>
<br>

| Model | GSM8K<br>en | MATH<br>en | GSM8K<br>zh | MATH<br>zh | GSM8K<br>vi | MATH<br>vi | GSM8K<br>id | MATH<br>id | GSM8K<br>th | MATH<br>th |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 80.8 | 34.1 | 48.2 | 21.5 | 55 | 26.5 | 64.3 | 26.4 | 35.8 | 18.1 |
| Qwen-14B-chat | 61.4 | 18.4 | 41.6 | 11.8 | 33.6 | 3.6 | 44.7 | 8.6 | 22 | 6 |
| Vistral-7B-chat | 48.2 | 12.5 | | | 48.7 | 3.1 | | | | |
| Qwen1.5-7B-chat | 56.8 | 15.3 | 40 | 2.7 | 37.7 | 9 | 36.9 | 7.7 | 21.9 | |
| SeaLLM-7B-v2 | 78.2 | 27.5 | 53.7 | 17.6 | 69.9 | 23.8 | 71.5 | 24.4 | 59.6 | 22.4 |

</details>

Baselines were evaluated using their respective chat templates and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Vistral](https://huggingface.co/Viet-Mistral/Vistral-7B-Chat)).

#### Zero-shot MGSM

[SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) also outperforms GPT-3.5 and Qwen-14B on the multilingual MGSM for Zh and Th.

| Model | MGSM-Zh | MGSM-Th |
| --- | --- | --- |
| ChatGPT (reported) | 61.2 | 47.2 |
| Qwen-14B-chat | 59.6 | 28 |
| SeaLLM-7B-v2 | **64.8** | **62.4** |

### Zero-shot Commonsense Reasoning

We compare [SeaLLM-7B-v2](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2) with ChatGPT and Mistral-7B-Instruct on various zero-shot commonsense benchmarks (Arc-Challenge, Winogrande, and Hellaswag). We use the 2-stage technique from [(Kojima et al., 2023)](https://arxiv.org/pdf/2205.11916.pdf) to extract the answer (a sketch of this two-stage extraction follows the results below). Note that we **DID NOT** use "Let's think step-by-step" to invoke explicit CoT.

| 0-shot reasoning | Arc-Challenge | Winogrande | Hellaswag |
| --- | --- | --- | --- |
| ChatGPT (reported) | 84.6* | 66.8* | 72.0* |
| ChatGPT (reproduced) | 84.1 | 63.1 | 79.5 |
| Mistral-7B-Instruct | 68.1 | 56.4 | 45.6 |
| Qwen1.5-7B-chat | 79.3 | 59.4 | 69.3 |
| SeaLLM-7B-v2 | 82.5 | 68.3 | 80.9 |

Baselines were evaluated using their respective chat templates and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json), [Mistral](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)).

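For reference, the two-stage extraction works by first letting the model answer freely and then re-prompting it to commit to one option. The sketch below illustrates that idea only; the exact prompts used in the reported runs are not specified here, so the extraction wording (e.g., "Therefore, the answer is") and the `generate` callable are assumptions.

```python
# Two-stage answer extraction in the spirit of Kojima et al. (2023).
# `generate` is any callable mapping a prompt string to generated text;
# the extraction prompt wording below is an assumption for illustration.
def answer_mc_question(generate, question: str, options: list) -> str:
    option_str = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(options))
    # Stage 1: free-form answer (note: no "Let's think step by step" trigger).
    stage1_prompt = f"{question}\n{option_str}\nAnswer:"
    free_answer = generate(stage1_prompt)
    # Stage 2: append an extraction prompt so the model commits to one letter.
    stage2_prompt = f"{stage1_prompt} {free_answer}\nTherefore, the answer is ("
    return generate(stage2_prompt).strip()[:1]  # e.g. "A"
```
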
### Multilingual World Knowledge

We evaluate models on 3 benchmarks following the recommended default setups: 5-shot MMLU for En, 3-shot [M3Exam](https://arxiv.org/pdf/2306.05179.pdf) (M3e) for En, Zh, Vi, Id, and Th, and zero-shot [VMLU](https://vmlu.ai/) for Vi.

| Model | Langs | En<br>MMLU | En<br>M3e | Zh<br>M3e | Vi<br>M3e | Vi<br>VMLU | Id<br>M3e | Th<br>M3e |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | Multi | 68.90 | 75.46 | 60.20 | 58.64 | 46.32 | 49.27 | 37.41 |
| Vistral-7B-chat | Mono | 56.86 | 67.00 | 44.56 | 54.33 | 50.03 | 36.49 | 25.27 |
| Qwen1.5-7B-chat | Multi | 61.00 | 52.07 | 81.96 | 43.38 | 45.02 | 24.29 | 20.25 |
| SeaLLM-7B-v2 | Multi | 61.89 | 70.91 | 55.43 | 51.15 | 45.74 | 42.25 | 35.52 |

The VMLU reproduction script is [here](https://github.com/DAMO-NLP-SG/SeaLLMs/blob/main/evaluation/vmlu/vmlu_run.py). lm-eval was used to evaluate MMLU.
0-shot VMLU scores for baselines were evaluated using their respective chat templates and system prompts ([Qwen1.5-7B-chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat/blob/main/tokenizer_config.json)).

### MT-Bench

On the English [MT-bench](https://arxiv.org/abs/2306.05685), SeaLLM-7B-v2 achieves a score of **7.54** (3rd place on the leaderboard in the 7B category), outperforming many 70B models, and it is arguably the only one that handles 10 SEA languages.

Refer to [mt_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/mt_bench/seallm_7b_v2.jsonl) for the MT-bench predictions of SeaLLM-7B-v2, and see [here](https://github.com/lm-sys/FastChat/issues/3013#issue-2118685341) for how to reproduce them.

| Model | Access | Langs | MT-Bench |
| --- | --- | --- | --- |
| GPT-4-turbo | closed | multi | 9.32 |
| GPT-4-0613 | closed | multi | 9.18 |
| Mixtral-8x7b (46B) | open | multi | 8.3 |
| Starling-LM-7B-alpha | open | mono (en) | 8.0 |
| OpenChat-3.5-7B | open | mono (en) | 7.81 |
| **SeaLLM-7B-v2** | **open** | **multi (10+)** | **7.54** |
| [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B-Chat) | open | multi | 6.96 |
| [Llama-2-70B](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) | open | mono (en) | 6.86 |
| Mistral-7B-Instruct | open | mono (en) | 6.84 |

### Sea-Bench

Similar to MT-Bench, [Sea-bench](https://huggingface.co/datasets/SeaLLMs/Sea-bench) is a set of categorized instruction test sets that measures a model's ability as an assistant, focused specifically on 9 SEA languages, including non-Latin low-resource languages.

As shown, the largest improvements come from math reasoning, which reaches GPT-3.5-level performance.

![fig_sea_bench_side_by_side.png](fig_sea_bench_side_by_side.png)

Refer to [sea_bench/seallm_7b_v2.jsonl](https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/blob/main/evaluation/sea_bench/seallm_7b_v2.jsonl) for the Sea-bench predictions of SeaLLM-7B-v2.

### Usage

#### Instruction format

```python
# Note: there is NO newline between </s> and the next <|im_start|>,
# and <|im_start|> is not a special token.
# The transformers chat_template is consistent with the vLLM format below.
prompt = """<|im_start|>system
You are a helpful assistant.</s><|im_start|>user
Hello world</s><|im_start|>assistant
Hi there, how can I help?</s>"""

# ! ENSURE one and only one bos `<s>` at the beginning of the sequence
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

print(tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt)))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>']
```

#### Using transformers' chat_template
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# use bfloat16 to ensure the best performance.
model = AutoModelForCausalLM.from_pretrained("SeaLLMs/SeaLLM-7B-v2", torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM-7B-v2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello world"},
    {"role": "assistant", "content": "Hi there, how can I help you today?"},
    {"role": "user", "content": "Explain general relativity in details."}
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
print(tokenizer.convert_ids_to_tokens(encodeds[0]))
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁a', '▁helpful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '▁you', '▁today', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Ex', 'plain', '▁general', '▁rel', 'ativity', '▁in', '▁details', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>']

model_inputs = encodeds.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True, pad_token_id=tokenizer.pad_token_id)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

#### Using vLLM

```python
from vllm import LLM, SamplingParams

# There is no \n between </s> and the next <|im_start|>.
TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
TURN_PREFIX = "<|im_start|>{role}\n"

def seallm_chat_convo_format(conversations, add_assistant_prefix: bool, system_prompt=None):
    # conversations: list of dicts with keys `role` and `content` (OpenAI format)
    if conversations[0]['role'] != 'system' and system_prompt is not None:
        conversations = [{"role": "system", "content": system_prompt}] + conversations
    text = ''
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        text += prompt
    if add_assistant_prefix:
        prompt = TURN_PREFIX.format(role='assistant')
        text += prompt
    return text

sparams = SamplingParams(temperature=0.1, max_tokens=1024, stop=['</s>', '<|im_start|>'])
llm = LLM("SeaLLMs/SeaLLM-7B-v2", dtype="bfloat16")

message = "Explain general relativity in details."
prompt = seallm_chat_convo_format([{"role": "user", "content": message}], add_assistant_prefix=True)
gen = llm.generate(prompt, sparams)

print(gen[0].outputs[0].text)
```
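
The same helper also handles multi-turn histories. A minimal usage sketch, reusing the `seallm_chat_convo_format`, `llm`, and `sparams` objects defined above:

```python
# Multi-turn usage of the helper above (a minimal sketch; assumes the
# objects from the previous block are in scope).
conversations = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Explain general relativity in details."},
]
prompt = seallm_chat_convo_format(conversations, add_assistant_prefix=True, system_prompt="You are a helpful assistant.")
gen = llm.generate(prompt, sparams)
print(gen[0].outputs[0].text)
```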

#### Fine-tuning SeaLLM-7B-v2

Fine-tuning should follow the chat format above and accurately mask out the source (non-assistant) tokens. Here is an example.

```python
conversations = [
    {"role": "system", "content": "You are helful assistant."},
    {"role": "user", "content": "Hello world."},
    {"role": "assistant", "content": "Hi there, how can I help?"},
    {"role": "user", "content": "Tell me a joke."},
    {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
]

def seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations, add_assistant_prefix=False):
    """
    Inputs:
        conversations: list of dicts following the OpenAI format, e.g.
            conversations = [
                {"role": "system", "content": "You are helful assistant."},
                {"role": "user", "content": "Hello world."},
                {"role": "assistant", "content": "Hi there, how can I help?"},
                {"role": "user", "content": "Tell me a joke."},
                {"role": "assistant", "content": "Why don't scientists trust atoms? Because they make up everything."},
            ]
        add_assistant_prefix: whether to add the assistant prefix; only for inference decoding
    Outputs:
        tokenize_output_sample, {
            "input_ids": ...
            "token_type_ids": 1 if trained on, 0 if masked out (not trained on)
        }
    During training, create labels with masked-out tokens set to -100 to avoid loss computation:
        labels = sample['input_ids'].clone()
        labels[sample['token_type_ids'] == 0] = -100
    """
    TURN_TEMPLATE = "<|im_start|>{role}\n{content}</s>"
    TURN_PREFIX = "<|im_start|>{role}\n"
    sample = None
    assistant_prefix_len = None
    for turn_id, turn in enumerate(conversations):
        prompt = TURN_TEMPLATE.format(role=turn['role'], content=turn['content'])
        turn_sample = tokenizer(
            prompt, padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        if turn['role'] == 'assistant':
            if assistant_prefix_len is None:
                assistant_prefix_len = len(tokenizer.encode(TURN_PREFIX.format(role=turn['role']), add_special_tokens=False))
            # train only on the assistant's content tokens, not on the turn prefix
            turn_sample['token_type_ids'][assistant_prefix_len:] = [1] * (len(turn_sample['input_ids']) - assistant_prefix_len)
        if sample is None:
            sample = turn_sample
        else:
            for k in turn_sample.keys():
                sample[k].extend(turn_sample[k])
    if add_assistant_prefix:
        assistant_prefix_sample = tokenizer(
            TURN_PREFIX.format(role="assistant"), padding=False, truncation=False, verbose=False, add_special_tokens=False,
            return_token_type_ids=True,
        )
        for k in sample.keys():
            sample[k].extend(assistant_prefix_sample[k])
    if tokenizer.add_bos_token:
        sample['input_ids'] = [tokenizer.bos_token_id] + sample['input_ids']
        sample['attention_mask'] = [1] + sample['attention_mask']
        sample['token_type_ids'] = [sample['token_type_ids'][0]] + sample['token_type_ids']
    return sample

# ! testing (assumes `tokenizer` from the sections above)
sample = seallm_7b_v2_tokenize_multi_turns(tokenizer, conversations)
print(tokenizer.convert_ids_to_tokens(sample['input_ids']))
print(sample['token_type_ids'])
# ['<s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'system', '<0x0A>', 'You', '▁are', '▁hel', 'ful', '▁assistant', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Hello', '▁world', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Hi', '▁there', ',', '▁how', '▁can', '▁I', '▁help', '?', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'user', '<0x0A>', 'Tell', '▁me', '▁a', '▁joke', '.', '</s>', '▁<', '|', 'im', '_', 'start', '|', '>', 'ass', 'istant', '<0x0A>', 'Why', '▁don', "'", 't', '▁scientists', '▁trust', '▁atoms', '?', '▁Because', '▁they', '▁make', '▁up', '▁everything', '.', '</s>']
# [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
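
During training, the `token_type_ids` mask is turned into labels as described in the docstring above. A minimal sketch of one loss computation, assuming `model` is the model loaded in the transformers example:

```python
# Build training tensors from `sample`: positions with token_type_ids == 0
# are set to -100 so the loss covers only the assistant tokens.
import torch

input_ids = torch.tensor([sample['input_ids']], device=model.device)          # shape (1, seq_len)
attention_mask = torch.tensor([sample['attention_mask']], device=model.device)
token_type_ids = torch.tensor([sample['token_type_ids']], device=model.device)

labels = input_ids.clone()
labels[token_type_ids == 0] = -100  # mask out system/user/prefix tokens

outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)
print(outputs.loss)  # backpropagate this loss during fine-tuning
```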

## Acknowledgement to Our Linguists

We would like to express our special thanks to our professional and native linguists, Tantong Champaiboon, Nguyen Ngoc Yen Nhi, and Tara Devina Putri, who helped build, evaluate, and fact-check our sampled pretraining and SFT datasets, as well as evaluating our models across different aspects, especially safety.

## Citation

If you find our project useful, we hope you will kindly star our repo and cite our work as follows. Corresponding author: [l.bing@alibaba-inc.com](mailto:l.bing@alibaba-inc.com)

**Author list and order will change!**

* `*` and `^` are equal contributions.

```
@article{damonlpsg2023seallm,
  author = {Xuan-Phi Nguyen*, Wenxuan Zhang*, Xin Li*, Mahani Aljunied*,
            Zhiqiang Hu, Chenhui Shen^, Yew Ken Chia^, Xingxuan Li, Jianyu Wang,
            Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang,
            Chaoqun Liu, Hang Zhang, Lidong Bing},
  title = {SeaLLMs - Large Language Models for Southeast Asia},
  year = 2023,
  Eprint = {arXiv:2312.00738},
}
```