---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Mistral-7B-Instruct-v0.3

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.3.

Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2):
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling

## Installation

We recommend using `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For Hugging Face `transformers` code snippets, see the sections below.

```
pip install mistral_inference
```

## Download

```py
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```

### Chat

After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using:

```
mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256
```

### Instruction following

```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```

### Function calling

```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest


tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3")
model = Transformer.from_folder(mistral_models_path)

completion_request = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the user's location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris?"),
    ],
)

tokens = tokenizer.encode_chat_completion(completion_request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])

print(result)
```
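The decoded `result` for a tool call is typically a JSON list of `{"name": ..., "arguments": ...}` objects, though the exact serialization depends on the tokenizer version. As a hedged illustration (the `raw` string below is a hypothetical model output, not guaranteed to match what your run produces), dispatching such output to a local Python function could look like:

```python
import json

# Hypothetical raw tool-call output; the real string comes from decoding
# `out_tokens` above and may differ in exact shape.
raw = '[{"name": "get_current_weather", "arguments": {"location": "Paris, France", "format": "celsius"}}]'

def get_current_weather(location: str, format: str) -> str:
    # Stub implementation for illustration only.
    return f"22 degrees {format} in {location}"

# Map tool names to callables, then dispatch each parsed call.
tools = {"get_current_weather": get_current_weather}
for call in json.loads(raw):
    result = tools[call["name"]](**call["arguments"])
    print(result)  # prints "22 degrees celsius in Paris, France"
```

In a real loop, `result` would then be fed back to the model as a tool message so it can compose a final answer.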

## Generate with `transformers`

If you want to use Hugging Face `transformers` to generate text, you can do something like this.

```py
from transformers import pipeline

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
chatbot(messages)
```


## Function calling with `transformers`

To use this example, you'll need `transformers` version 4.42.0 or higher. Please see the
[function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling)
in the `transformers` docs for more information.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mistralai/Mistral-7B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_current_weather(location: str, format: str):
    """
    Get the current weather

    Args:
        location: The city and state, e.g. San Francisco, CA
        format: The temperature unit to use. Infer this from the user's location. (choices: ["celsius", "fahrenheit"])
    """
    pass

conversation = [{"role": "user", "content": "What's the weather like in Paris?"}]
tools = [get_current_weather]


# format and tokenize the tool use prompt
inputs = tokenizer.apply_chat_template(
    conversation,
    tools=tools,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = inputs.to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1000)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, for reasons of space, this example does not show a complete cycle of calling a tool and adding the tool call and tool
results to the chat history so that the model can use them in its next generation. For a full tool calling example, please
see the [function calling guide](https://huggingface.co/docs/transformers/main/chat_templating#advanced-tool-use--function-calling),
and note that Mistral **does** use tool call IDs, so these must be included in your tool calls and tool results. They should be
exactly 9 alphanumeric characters.
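Since these IDs are easy to get wrong, here is a minimal sketch of generating one and threading it through the chat history, following the message shapes from the `transformers` chat-templating docs. The specific tool call and its `"22"` result are hypothetical, for illustration only:

```python
import json
import random
import string

def make_tool_call_id() -> str:
    # Mistral expects tool call IDs of exactly 9 alphanumeric characters.
    return "".join(random.choices(string.ascii_letters + string.digits, k=9))

# Hypothetical continuation of the conversation above: record the model's
# tool call, then the tool's result, keyed by the same ID.
tool_call_id = make_tool_call_id()
conversation = [
    {"role": "user", "content": "What's the weather like in Paris?"},
    {
        "role": "assistant",
        "tool_calls": [
            {
                "id": tool_call_id,
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": json.dumps({"location": "Paris, France", "format": "celsius"}),
                },
            }
        ],
    },
    # The tool result references the same 9-character ID.
    {"role": "tool", "tool_call_id": tool_call_id, "name": "get_current_weather", "content": "22"},
]
```

After appending these messages, the conversation can be re-templated with `apply_chat_template` as above so the model can use the tool result in its next generation.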


## Limitations

The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
