epappas committed on
Commit 7a23acd · verified · 1 Parent(s): 4645d8e

Upload model artifacts

Files changed (50)
  1. .gitattributes +32 -0
  2. README.md +72 -0
  3. adapter_config.json +46 -0
  4. adapter_model.safetensors +3 -0
  5. chat_template.jinja +121 -0
  6. checkpoint-100/README.md +209 -0
  7. checkpoint-100/adapter_config.json +46 -0
  8. checkpoint-100/adapter_model.safetensors +3 -0
  9. checkpoint-100/chat_template.jinja +121 -0
  10. checkpoint-100/optimizer.pt +3 -0
  11. checkpoint-100/rng_state.pth +3 -0
  12. checkpoint-100/scheduler.pt +3 -0
  13. checkpoint-100/tokenizer.json +3 -0
  14. checkpoint-100/tokenizer_config.json +15 -0
  15. checkpoint-100/trainer_state.json +331 -0
  16. checkpoint-100/training_args.bin +3 -0
  17. checkpoint-1000/README.md +209 -0
  18. checkpoint-1000/adapter_config.json +46 -0
  19. checkpoint-1000/adapter_model.safetensors +3 -0
  20. checkpoint-1000/chat_template.jinja +121 -0
  21. checkpoint-1000/optimizer.pt +3 -0
  22. checkpoint-1000/rng_state.pth +3 -0
  23. checkpoint-1000/scheduler.pt +3 -0
  24. checkpoint-1000/tokenizer.json +3 -0
  25. checkpoint-1000/tokenizer_config.json +15 -0
  26. checkpoint-1000/trainer_state.json +0 -0
  27. checkpoint-1000/training_args.bin +3 -0
  28. checkpoint-1100/README.md +209 -0
  29. checkpoint-1100/adapter_config.json +46 -0
  30. checkpoint-1100/adapter_model.safetensors +3 -0
  31. checkpoint-1100/chat_template.jinja +121 -0
  32. checkpoint-1100/optimizer.pt +3 -0
  33. checkpoint-1100/rng_state.pth +3 -0
  34. checkpoint-1100/scheduler.pt +3 -0
  35. checkpoint-1100/tokenizer.json +3 -0
  36. checkpoint-1100/tokenizer_config.json +15 -0
  37. checkpoint-1100/trainer_state.json +0 -0
  38. checkpoint-1100/training_args.bin +3 -0
  39. checkpoint-1200/README.md +209 -0
  40. checkpoint-1200/adapter_config.json +46 -0
  41. checkpoint-1200/adapter_model.safetensors +3 -0
  42. checkpoint-1200/chat_template.jinja +121 -0
  43. checkpoint-1200/optimizer.pt +3 -0
  44. checkpoint-1200/rng_state.pth +3 -0
  45. checkpoint-1200/scheduler.pt +3 -0
  46. checkpoint-1200/tokenizer.json +3 -0
  47. checkpoint-1200/tokenizer_config.json +15 -0
  48. checkpoint-1200/trainer_state.json +0 -0
  49. checkpoint-1200/training_args.bin +3 -0
  50. checkpoint-1300/README.md +209 -0
.gitattributes CHANGED
@@ -33,3 +33,35 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ checkpoint-100/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1100/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1200/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1300/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1400/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1500/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1600/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1700/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1800/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-1900/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-200/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2100/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2200/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2300/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2400/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2500/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2600/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2700/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2800/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-2900/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-300/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-3000/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-3039/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-400/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-500/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-600/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-700/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-800/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ checkpoint-900/tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
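The added lines mark every `tokenizer.json` (alongside the pre-existing archive globs) as Git LFS-tracked, so Git stores a small pointer and LFS stores the blob. As an illustrative sketch only, Python's `fnmatch` roughly approximates how these patterns select paths (real `.gitattributes` matching uses gitignore-style rules, so this is an approximation, not Git's algorithm):

```python
from fnmatch import fnmatch

# A few of the patterns from the .gitattributes diff above.
LFS_PATTERNS = [
    "*.zip",
    "*.zst",
    "*tfevents*",
    "checkpoint-100/tokenizer.json",
    "tokenizer.json",
]

def is_lfs_tracked(path: str) -> bool:
    """Return True if `path` matches any LFS-tracked pattern (approximate)."""
    return any(fnmatch(path, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("checkpoint-100/tokenizer.json"))   # True
print(is_lfs_tracked("checkpoint-100/trainer_state.json"))  # False
```

Note this is why `trainer_state.json` files appear in the diff as plain text while every `tokenizer.json` is a pointer.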
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
+ library_name: peft
+ model_name: mistral-grpo
+ tags:
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
+ - grpo
+ - lora
+ - transformers
+ - trl
+ licence: license
+ pipeline_tag: text-generation
+ ---
+
+ # Model Card for mistral-grpo
+
+ This model is a fine-tuned version of [mistralai/Ministral-3-3B-Instruct-2512-BF16](https://huggingface.co/mistralai/Ministral-3-3B-Instruct-2512-BF16).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="None", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/evalonlabs/mistral-rl/runs/cex6rpwh)
+
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - PEFT 0.18.1
+ - TRL: 0.29.0
+ - Transformers: 5.2.0
+ - Pytorch: 2.6.0+cu124
+ - Datasets: 4.6.1
+ - Tokenizers: 0.22.2
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{shao2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @software{vonwerra2020trl,
+ title = {{TRL: Transformers Reinforcement Learning}},
+ author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
+ license = {Apache-2.0},
+ url = {https://github.com/huggingface/trl},
+ year = {2020}
+ }
+ ```
adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": null,
+ "base_model_name_or_path": "mistralai/Ministral-3-3B-Instruct-2512-BF16",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.1",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "o_proj",
+ "gate_proj",
+ "q_proj",
+ "up_proj",
+ "v_proj",
+ "k_proj",
+ "down_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
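From the config above, standard LoRA scales the adapter update `B @ A` by `lora_alpha / r`; with `lora_alpha = 64` and `r = 32` that gives a scaling factor of 2.0. Had `use_rslora` been enabled (it is `false` here), the scaling would instead be `lora_alpha / sqrt(r)`. A quick sketch of the arithmetic:

```python
import math

# Values taken from adapter_config.json above.
lora_alpha = 64
r = 32
use_rslora = False

# Standard LoRA: delta_W = (alpha / r) * B @ A
# rsLoRA:        delta_W = (alpha / sqrt(r)) * B @ A
scaling = lora_alpha / math.sqrt(r) if use_rslora else lora_alpha / r
print(scaling)  # 2.0
```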
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:88f0bd67667e5674f85ccd00ae05443452dc3ca8f636ad926403244c639054e8
+ size 270117632
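The three lines above are a Git LFS pointer file: the repository stores only the spec `version`, the `oid` (SHA-256 of the real blob), and the `size` in bytes, while the ~270 MB adapter itself lives in LFS storage. A minimal parser for this key-value format:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a {key: value} dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:88f0bd67667e5674f85ccd00ae05443452dc3ca8f636ad926403244c639054e8
size 270117632
"""

info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")
print(algo, int(info["size"]))  # sha256 270117632
```

The same format recurs for every `.safetensors`, `.pt`, `.pth`, and `tokenizer.json` entry in this commit; only `oid` and `size` differ.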
chat_template.jinja ADDED
@@ -0,0 +1,121 @@
+ {#- Default system message if no system prompt is passed. #}
+ {%- set default_system_message = 'You are Ministral-3-3B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
+
+ {#- Begin of sequence token. #}
+ {{- bos_token }}
+
+ {#- Handle system prompt if it exists. #}
+ {#- System prompt supports text content or text chunks. #}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '[SYSTEM_PROMPT]' -}}
+ {%- if messages[0]['content'] is string %}
+ {{- messages[0]['content'] -}}
+ {%- else %}
+ {%- for block in messages[0]['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {{- '[/SYSTEM_PROMPT]' -}}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- if default_system_message != '' %}
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
+ {%- endif %}
+ {%- endif %}
+
+
+ {#- Tools definition #}
+ {%- set tools_definition = '' %}
+ {%- set has_tools = false %}
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
+ {%- set has_tools = true %}
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
+ {{- tools_definition }}
+ {%- endif %}
+
+ {#- Checks for alternating user/assistant messages. #}
+ {%- set ns = namespace(index=0) %}
+ {%- for message in loop_messages %}
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
+ {%- endif %}
+ {%- set ns.index = ns.index + 1 %}
+ {%- endif %}
+ {%- endfor %}
+
+ {#- Handle conversation messages. #}
+ {%- for message in loop_messages %}
+
+ {#- User messages supports text content or text and image chunks. #}
+ {%- if message['role'] == 'user' %}
+ {%- if message['content'] is string %}
+ {{- '[INST]' + message['content'] + '[/INST]' }}
+ {%- elif message['content'] | length > 0 %}
+ {{- '[INST]' }}
+ {%- if message['content'] | length == 2 %}
+ {%- set blocks = message['content'] | sort(attribute='type') %}
+ {%- else %}
+ {%- set blocks = message['content'] %}
+ {%- endif %}
+ {%- for block in blocks %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- elif block['type'] in ['image', 'image_url'] %}
+ {{- '[IMG]' }}
+ {%- else %}
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
+ {%- endif %}
+ {%- endfor %}
+ {{- '[/INST]' }}
+ {%- else %}
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
+ {%- endif %}
+
+ {#- Assistant messages supports text content or text and image chunks. #}
+ {%- elif message['role'] == 'assistant' %}
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
+ {%- endif %}
+
+ {%- if message['content'] is string %}
+ {{- message['content'] }}
+ {%- elif message['content'] | length > 0 %}
+ {%- for block in message['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
+ {%- for tool in message['tool_calls'] %}
+ {%- set arguments = tool['function']['arguments'] %}
+ {%- if arguments is not string %}
+ {%- set arguments = arguments|tojson|safe %}
+ {%- elif arguments == '' %}
+ {%- set arguments = '{}' %}
+ {%- endif %}
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
+ {%- endfor %}
+ {%- endif %}
+
+ {#- End of sequence token for each assistant messages. #}
+ {{- eos_token }}
+
+ {#- Tool messages only supports text content. #}
+ {%- elif message['role'] == 'tool' %}
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
+
+ {#- Raise exception for unsupported roles. #}
+ {%- else %}
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
+ {%- endif %}
+ {%- endfor %}
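The template's role-alternation guard (plain user/assistant turns must alternate after the optional system message; assistant turns that only carry tool calls, and tool results, are exempt) can be mirrored in plain Python. This is an illustrative re-implementation for clarity, not part of the repository:

```python
def check_alternation(messages):
    """Mirror the chat template's check: after the optional system
    message, plain user/assistant turns must alternate; assistant
    messages consisting only of tool calls, and tool results, are
    skipped by the counter."""
    index = 0
    for message in messages:
        plain_assistant = (message["role"] == "assistant"
                           and not message.get("tool_calls"))
        if message["role"] == "user" or plain_assistant:
            # Even positions must be user turns, odd ones assistant turns.
            if (message["role"] == "user") != (index % 2 == 0):
                raise ValueError(
                    "conversation roles must alternate user/assistant")
            index += 1

# A valid two-turn conversation passes silently.
check_alternation([
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
])
```

A conversation with two consecutive user turns would raise, while an assistant tool-call turn followed by a tool result does not advance the counter, matching the template's `ns.index` logic.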
checkpoint-100/README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
+ - grpo
+ - lora
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
checkpoint-100/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+ "alora_invocation_tokens": null,
+ "alpha_pattern": {},
+ "arrow_config": null,
+ "auto_mapping": null,
+ "base_model_name_or_path": "mistralai/Ministral-3-3B-Instruct-2512-BF16",
+ "bias": "none",
+ "corda_config": null,
+ "ensure_weight_tying": false,
+ "eva_config": null,
+ "exclude_modules": null,
+ "fan_in_fan_out": false,
+ "inference_mode": true,
+ "init_lora_weights": true,
+ "layer_replication": null,
+ "layers_pattern": null,
+ "layers_to_transform": null,
+ "loftq_config": {},
+ "lora_alpha": 64,
+ "lora_bias": false,
+ "lora_dropout": 0.05,
+ "megatron_config": null,
+ "megatron_core": "megatron.core",
+ "modules_to_save": null,
+ "peft_type": "LORA",
+ "peft_version": "0.18.1",
+ "qalora_group_size": 16,
+ "r": 32,
+ "rank_pattern": {},
+ "revision": null,
+ "target_modules": [
+ "o_proj",
+ "gate_proj",
+ "q_proj",
+ "up_proj",
+ "v_proj",
+ "k_proj",
+ "down_proj"
+ ],
+ "target_parameters": null,
+ "task_type": "CAUSAL_LM",
+ "trainable_token_indices": null,
+ "use_dora": false,
+ "use_qalora": false,
+ "use_rslora": false
+ }
checkpoint-100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d7d4ef4fe1131cc86cda2978d9e057b20db09813912f5c1fceb41ab4abdb851
+ size 270117632
checkpoint-100/chat_template.jinja ADDED
@@ -0,0 +1,121 @@
+ {#- Default system message if no system prompt is passed. #}
+ {%- set default_system_message = 'You are Ministral-3-3B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
+
+ {#- Begin of sequence token. #}
+ {{- bos_token }}
+
+ {#- Handle system prompt if it exists. #}
+ {#- System prompt supports text content or text chunks. #}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '[SYSTEM_PROMPT]' -}}
+ {%- if messages[0]['content'] is string %}
+ {{- messages[0]['content'] -}}
+ {%- else %}
+ {%- for block in messages[0]['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {{- '[/SYSTEM_PROMPT]' -}}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- if default_system_message != '' %}
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
+ {%- endif %}
+ {%- endif %}
+
+
+ {#- Tools definition #}
+ {%- set tools_definition = '' %}
+ {%- set has_tools = false %}
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
+ {%- set has_tools = true %}
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
+ {{- tools_definition }}
+ {%- endif %}
+
+ {#- Checks for alternating user/assistant messages. #}
+ {%- set ns = namespace(index=0) %}
+ {%- for message in loop_messages %}
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
+ {%- endif %}
+ {%- set ns.index = ns.index + 1 %}
+ {%- endif %}
+ {%- endfor %}
+
+ {#- Handle conversation messages. #}
+ {%- for message in loop_messages %}
+
+ {#- User messages supports text content or text and image chunks. #}
+ {%- if message['role'] == 'user' %}
+ {%- if message['content'] is string %}
+ {{- '[INST]' + message['content'] + '[/INST]' }}
+ {%- elif message['content'] | length > 0 %}
+ {{- '[INST]' }}
+ {%- if message['content'] | length == 2 %}
+ {%- set blocks = message['content'] | sort(attribute='type') %}
+ {%- else %}
+ {%- set blocks = message['content'] %}
+ {%- endif %}
+ {%- for block in blocks %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- elif block['type'] in ['image', 'image_url'] %}
+ {{- '[IMG]' }}
+ {%- else %}
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
+ {%- endif %}
+ {%- endfor %}
+ {{- '[/INST]' }}
+ {%- else %}
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
+ {%- endif %}
+
+ {#- Assistant messages supports text content or text and image chunks. #}
+ {%- elif message['role'] == 'assistant' %}
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
+ {%- endif %}
+
+ {%- if message['content'] is string %}
+ {{- message['content'] }}
+ {%- elif message['content'] | length > 0 %}
+ {%- for block in message['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
+ {%- for tool in message['tool_calls'] %}
+ {%- set arguments = tool['function']['arguments'] %}
+ {%- if arguments is not string %}
+ {%- set arguments = arguments|tojson|safe %}
+ {%- elif arguments == '' %}
+ {%- set arguments = '{}' %}
+ {%- endif %}
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
+ {%- endfor %}
+ {%- endif %}
+
+ {#- End of sequence token for each assistant messages. #}
+ {{- eos_token }}
+
+ {#- Tool messages only supports text content. #}
+ {%- elif message['role'] == 'tool' %}
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
+
+ {#- Raise exception for unsupported roles. #}
+ {%- else %}
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
+ {%- endif %}
+ {%- endfor %}
checkpoint-100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b328dde0d81e8c36db74f2f86c7341b981443f73b4f06fb82fe5319b341e9bc5
+ size 395621270
checkpoint-100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:34717c6aeffd455f0dd834eb8a5336692adc3f38ddacce64806dcb83743fed0f
+ size 14244
checkpoint-100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0674519d27dbbd2d728a1ccd5a57facacfb157b7c07b5f5a65d67f75b1983349
+ size 1064
checkpoint-100/tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8fbe698063980a09a4487ec5bbcc545aab380d686f9d918cad649bb49d257f83
+ size 17078265
checkpoint-100/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "add_prefix_space": null,
+   "backend": "tokenizers",
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "is_local": false,
+   "legacy": true,
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<pad>",
+   "processor_class": "PixtralProcessor",
+   "tokenizer_class": "TokenizersBackend",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
checkpoint-100/trainer_state.json ADDED
@@ -0,0 +1,331 @@
+ {
+   "best_global_step": null,
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 0.03290556103981573,
+   "eval_steps": 100,
+   "global_step": 100,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.8625,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 56.3,
+       "completions/mean_length": 171.4875,
+       "completions/mean_terminated_length": 42.87023868560791,
+       "completions/min_length": 90.4,
+       "completions/min_terminated_length": 32.8,
+       "entropy": 3.150948863662779,
+       "epoch": 0.003290556103981573,
+       "frac_reward_zero_std": 0.375,
+       "grad_norm": 0.07571335881948471,
+       "learning_rate": 4.891304347826088e-07,
+       "loss": -0.02059326320886612,
+       "num_tokens": 47670.0,
+       "reward": -0.44160416126251223,
+       "reward_std": 0.9783457338809967,
+       "rewards/reward_fn/mean": -0.44160416126251223,
+       "rewards/reward_fn/std": 0.9783458098769188,
+       "step": 10,
+       "step_time": 10.653649869200763
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.8625,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 96.3,
+       "completions/mean_length": 177.18125,
+       "completions/mean_terminated_length": 62.6,
+       "completions/min_length": 87.5,
+       "completions/min_terminated_length": 29.9,
+       "entropy": 3.170444035157561,
+       "epoch": 0.006581112207963146,
+       "frac_reward_zero_std": 0.325,
+       "grad_norm": 0.08302783966064453,
+       "learning_rate": 1.032608695652174e-06,
+       "loss": 0.02854466438293457,
+       "num_tokens": 86143.0,
+       "reward": -0.346833336353302,
+       "reward_std": 1.174746835231781,
+       "rewards/reward_fn/mean": -0.346833336353302,
+       "rewards/reward_fn/std": 1.174746859073639,
+       "step": 20,
+       "step_time": 10.326057229000071
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.8375,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 74.3,
+       "completions/mean_length": 170.80625,
+       "completions/mean_terminated_length": 37.34583358764648,
+       "completions/min_length": 77.7,
+       "completions/min_terminated_length": 20.1,
+       "entropy": 2.959633106784895,
+       "epoch": 0.009871668311944718,
+       "frac_reward_zero_std": 0.325,
+       "grad_norm": 0.0623028501868248,
+       "learning_rate": 1.5760869565217394e-06,
+       "loss": 0.0123717300593853,
+       "num_tokens": 127372.0,
+       "reward": -0.2614687532186508,
+       "reward_std": 1.187442547082901,
+       "rewards/reward_fn/mean": -0.2614687532186508,
+       "rewards/reward_fn/std": 1.1874426007270813,
+       "step": 30,
+       "step_time": 10.388007634900077
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.88125,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 96.0,
+       "completions/mean_length": 180.9125,
+       "completions/mean_terminated_length": 82.63333358764649,
+       "completions/min_length": 125.7,
+       "completions/min_terminated_length": 68.1,
+       "entropy": 3.1744957454502583,
+       "epoch": 0.013162224415926292,
+       "frac_reward_zero_std": 0.375,
+       "grad_norm": 0.07959338277578354,
+       "learning_rate": 2.1195652173913046e-06,
+       "loss": 0.02990533709526062,
+       "num_tokens": 166186.0,
+       "reward": -0.5520937621593476,
+       "reward_std": 0.9227306637912989,
+       "rewards/reward_fn/mean": -0.5520937621593476,
+       "rewards/reward_fn/std": 0.922730703651905,
+       "step": 40,
+       "step_time": 10.399955635900369
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.89375,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 37.2,
+       "completions/mean_length": 175.48125,
+       "completions/mean_terminated_length": 17.808333587646484,
+       "completions/min_length": 82.9,
+       "completions/min_terminated_length": 6.1,
+       "entropy": 3.2384878624230624,
+       "epoch": 0.016452780519907863,
+       "frac_reward_zero_std": 0.325,
+       "grad_norm": 0.11105164140462875,
+       "learning_rate": 2.6630434782608698e-06,
+       "loss": 0.012976028025150299,
+       "num_tokens": 210311.0,
+       "reward": -0.6406666725873947,
+       "reward_std": 0.9487585216760636,
+       "rewards/reward_fn/mean": -0.6406666725873947,
+       "rewards/reward_fn/std": 0.9487585410475731,
+       "step": 50,
+       "step_time": 10.453273071400963
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.85,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 115.2,
+       "completions/mean_length": 177.4125,
+       "completions/mean_terminated_length": 84.49166793823242,
+       "completions/min_length": 94.9,
+       "completions/min_terminated_length": 56.5,
+       "entropy": 3.319019041955471,
+       "epoch": 0.019743336623889437,
+       "frac_reward_zero_std": 0.425,
+       "grad_norm": 0.08200488239526749,
+       "learning_rate": 3.206521739130435e-06,
+       "loss": 0.013244372606277467,
+       "num_tokens": 252697.0,
+       "reward": -0.36183333694934844,
+       "reward_std": 1.0437055349349975,
+       "rewards/reward_fn/mean": -0.36183333694934844,
+       "rewards/reward_fn/std": 1.0437055587768556,
+       "step": 60,
+       "step_time": 10.409233478299665
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.8,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 80.8,
+       "completions/mean_length": 162.35,
+       "completions/mean_terminated_length": 46.82857220172882,
+       "completions/min_length": 46.5,
+       "completions/min_terminated_length": 27.3,
+       "entropy": 2.93527446128428,
+       "epoch": 0.02303389272787101,
+       "frac_reward_zero_std": 0.375,
+       "grad_norm": 0.052066221833229065,
+       "learning_rate": 3.7500000000000005e-06,
+       "loss": -0.018859776854515075,
+       "num_tokens": 301473.0,
+       "reward": -0.19455206990242005,
+       "reward_std": 1.0365907847881317,
+       "rewards/reward_fn/mean": -0.19455206990242005,
+       "rewards/reward_fn/std": 1.0365907967090606,
+       "step": 70,
+       "step_time": 10.572186487700037
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.91875,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 39.2,
+       "completions/mean_length": 180.73125,
+       "completions/mean_terminated_length": 28.185714721679688,
+       "completions/min_length": 116.8,
+       "completions/min_terminated_length": 20.8,
+       "entropy": 3.1717873765155673,
+       "epoch": 0.026324448831852584,
+       "frac_reward_zero_std": 0.375,
+       "grad_norm": 0.08918585628271103,
+       "learning_rate": 4.293478260869566e-06,
+       "loss": -0.029803618788719177,
+       "num_tokens": 351182.0,
+       "reward": -0.14356249272823335,
+       "reward_std": 1.2337665796279906,
+       "rewards/reward_fn/mean": -0.14356249272823335,
+       "rewards/reward_fn/std": 1.2337666153907776,
+       "step": 80,
+       "step_time": 10.542100278999987
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.8875,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 62.5,
+       "completions/mean_length": 177.99375,
+       "completions/mean_terminated_length": 47.50833358764648,
+       "completions/min_length": 93.4,
+       "completions/min_terminated_length": 35.8,
+       "entropy": 3.3330336447805164,
+       "epoch": 0.029615004935834157,
+       "frac_reward_zero_std": 0.4,
+       "grad_norm": 0.09377003461122513,
+       "learning_rate": 4.836956521739131e-06,
+       "loss": 0.020368821918964386,
+       "num_tokens": 392141.0,
+       "reward": -0.3047604262828827,
+       "reward_std": 1.0662935938686133,
+       "rewards/reward_fn/mean": -0.3047604262828827,
+       "rewards/reward_fn/std": 1.0662936087697745,
+       "step": 90,
+       "step_time": 10.53784074459918
+     },
+     {
+       "clip_ratio/high_max": 0.0,
+       "clip_ratio/high_mean": 0.0,
+       "clip_ratio/low_mean": 0.0,
+       "clip_ratio/low_min": 0.0,
+       "clip_ratio/region_mean": 0.0,
+       "completions/clipped_ratio": 0.91875,
+       "completions/max_length": 192.0,
+       "completions/max_terminated_length": 52.8,
+       "completions/mean_length": 181.1875,
+       "completions/mean_terminated_length": 45.35,
+       "completions/min_length": 96.1,
+       "completions/min_terminated_length": 38.5,
+       "entropy": 3.1780415017157795,
+       "epoch": 0.03290556103981573,
+       "frac_reward_zero_std": 0.45,
+       "grad_norm": 0.06737760454416275,
+       "learning_rate": 4.98812351543943e-06,
+       "loss": 0.0005840381607413291,
+       "num_tokens": 434979.0,
+       "reward": -0.03712502121925354,
+       "reward_std": 1.2626685500144958,
+       "rewards/reward_fn/mean": -0.03712502121925354,
+       "rewards/reward_fn/std": 1.2626685857772828,
+       "step": 100,
+       "step_time": 10.653271522400246
+     },
+     {
+       "epoch": 0.03290556103981573,
+       "eval_clip_ratio/high_max": 0.0,
+       "eval_clip_ratio/high_mean": 0.0,
+       "eval_clip_ratio/low_mean": 0.0,
+       "eval_clip_ratio/low_min": 0.0,
+       "eval_clip_ratio/region_mean": 0.0,
+       "eval_completions/clipped_ratio": 0.865,
+       "eval_completions/max_length": 191.32,
+       "eval_completions/max_terminated_length": 35.192,
+       "eval_completions/mean_length": 173.809,
+       "eval_completions/mean_terminated_length": 25.483600143432618,
+       "eval_completions/min_length": 133.76,
+       "eval_completions/min_terminated_length": 18.56,
+       "eval_entropy": 3.1887310934066773,
+       "eval_frac_reward_zero_std": 0.364,
+       "eval_loss": 0.00628992123529315,
+       "eval_num_tokens": 434979.0,
+       "eval_reward": -0.29314000272750856,
+       "eval_reward_std": 0.8848717106580735,
+       "eval_rewards/reward_fn/mean": -0.29314000272750856,
+       "eval_rewards/reward_fn/std": 0.8848717320859433,
+       "eval_runtime": 832.8061,
+       "eval_samples_per_second": 0.299,
+       "eval_steps_per_second": 0.038,
+       "step": 100
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 3039,
+   "num_input_tokens_seen": 434979,
+   "num_train_epochs": 1,
+   "save_steps": 100,
+   "stateful_callbacks": {
+     "TrainerControl": {
+       "args": {
+         "should_epoch_stop": false,
+         "should_evaluate": false,
+         "should_log": false,
+         "should_save": true,
+         "should_training_stop": false
+       },
+       "attributes": {}
+     }
+   },
+   "total_flos": 0.0,
+   "train_batch_size": 1,
+   "trial_name": null,
+   "trial_params": null
+ }
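The `log_history` above is plain JSON, so the reward trajectory can be pulled out with a few lines of Python. A minimal sketch (the checkpoint path in the usage comment is an example of this repo's layout, not a guaranteed API):

```python
import json

def reward_curve(state: dict) -> list:
    # Training entries carry a plain "reward" key; evaluation entries
    # use "eval_reward" instead, so they are filtered out here.
    return [(e["step"], e["reward"]) for e in state["log_history"] if "reward" in e]

# Example usage against a checkpoint directory:
# with open("checkpoint-100/trainer_state.json") as f:
#     state = json.load(f)
# print(reward_curve(state))
```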
checkpoint-100/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2c4a7d589a303b75f5a9711e2b12b41fa2b08921854a2742cce74a124cc0317
+ size 6584
checkpoint-1000/README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
+ library_name: peft
+ pipeline_tag: text-generation
+ tags:
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
+ - grpo
+ - lora
+ - transformers
+ - trl
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.18.1
checkpoint-1000/adapter_config.json ADDED
@@ -0,0 +1,46 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "mistralai/Ministral-3-3B-Instruct-2512-BF16",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 64,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "peft_version": "0.18.1",
+   "qalora_group_size": 16,
+   "r": 32,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "o_proj",
+     "gate_proj",
+     "q_proj",
+     "up_proj",
+     "v_proj",
+     "k_proj",
+     "down_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
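As a back-of-the-envelope check on the adapter config above (`r`: 32, `lora_alpha`: 64, `use_rslora`: false), the effective LoRA scaling and the parameter count added per target matrix can be computed directly. A minimal sketch in pure Python; the 2048 feature dimensions below are illustrative values, not read from this model:

```python
import math

config = {"r": 32, "lora_alpha": 64, "use_rslora": False}

# Standard LoRA scales the low-rank update BA by alpha / r;
# rsLoRA would use alpha / sqrt(r) instead.
if config["use_rslora"]:
    scaling = config["lora_alpha"] / math.sqrt(config["r"])
else:
    scaling = config["lora_alpha"] / config["r"]

def lora_param_count(in_features: int, out_features: int, r: int) -> int:
    # The A (r x in) and B (out x r) factors add r * (in + out) parameters
    # for each target weight matrix.
    return r * (in_features + out_features)

print(scaling)                          # 2.0
print(lora_param_count(2048, 2048, 32)) # 131072
```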
checkpoint-1000/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:589a8b18f8fcee937d4a89ead044a8cbb4f4789dd942f236190f129f796dcd6f
+ size 270117632
checkpoint-1000/chat_template.jinja ADDED
@@ -0,0 +1,121 @@
+ {#- Default system message if no system prompt is passed. #}
+ {%- set default_system_message = 'You are Ministral-3-3B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
+
+ {#- Begin of sequence token. #}
+ {{- bos_token }}
+
+ {#- Handle system prompt if it exists. #}
+ {#- System prompt supports text content or text chunks. #}
+ {%- if messages[0]['role'] == 'system' %}
+ {{- '[SYSTEM_PROMPT]' -}}
+ {%- if messages[0]['content'] is string %}
+ {{- messages[0]['content'] -}}
+ {%- else %}
+ {%- for block in messages[0]['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+ {{- '[/SYSTEM_PROMPT]' -}}
+ {%- set loop_messages = messages[1:] %}
+ {%- else %}
+ {%- set loop_messages = messages %}
+ {%- if default_system_message != '' %}
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
+ {%- endif %}
+ {%- endif %}
+
+
+ {#- Tools definition #}
+ {%- set tools_definition = '' %}
+ {%- set has_tools = false %}
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
+ {%- set has_tools = true %}
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
+ {{- tools_definition }}
+ {%- endif %}
+
+ {#- Checks for alternating user/assistant messages. #}
+ {%- set ns = namespace(index=0) %}
+ {%- for message in loop_messages %}
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
+ {%- endif %}
+ {%- set ns.index = ns.index + 1 %}
+ {%- endif %}
+ {%- endfor %}
+
+ {#- Handle conversation messages. #}
+ {%- for message in loop_messages %}
+
+ {#- User messages support text content or text and image chunks. #}
+ {%- if message['role'] == 'user' %}
+ {%- if message['content'] is string %}
+ {{- '[INST]' + message['content'] + '[/INST]' }}
+ {%- elif message['content'] | length > 0 %}
+ {{- '[INST]' }}
+ {%- if message['content'] | length == 2 %}
+ {%- set blocks = message['content'] | sort(attribute='type') %}
+ {%- else %}
+ {%- set blocks = message['content'] %}
+ {%- endif %}
+ {%- for block in blocks %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- elif block['type'] in ['image', 'image_url'] %}
+ {{- '[IMG]' }}
+ {%- else %}
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
+ {%- endif %}
+ {%- endfor %}
+ {{- '[/INST]' }}
+ {%- else %}
+ {{- raise_exception('User message must have a string or a list of chunks in content.') }}
+ {%- endif %}
+
+ {#- Assistant messages support text content, text chunks, or tool calls. #}
+ {%- elif message['role'] == 'assistant' %}
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
+ {%- endif %}
+
+ {%- if message['content'] is string %}
+ {{- message['content'] }}
+ {%- elif message['content'] | length > 0 %}
+ {%- for block in message['content'] %}
+ {%- if block['type'] == 'text' %}
+ {{- block['text'] }}
+ {%- else %}
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
+ {%- endif %}
+ {%- endfor %}
+ {%- endif %}
+
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
+ {%- for tool in message['tool_calls'] %}
+ {%- set arguments = tool['function']['arguments'] %}
+ {%- if arguments is not string %}
+ {%- set arguments = arguments|tojson|safe %}
+ {%- elif arguments == '' %}
+ {%- set arguments = '{}' %}
+ {%- endif %}
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
+ {%- endfor %}
+ {%- endif %}
+
+ {#- End of sequence token after each assistant message. #}
+ {{- eos_token }}
+
+ {#- Tool messages only support text content. #}
+ {%- elif message['role'] == 'tool' %}
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
+
+ {#- Raise an exception for unsupported roles. #}
+ {%- else %}
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
+ {%- endif %}
+ {%- endfor %}
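The control flow in the template above is ordinary Jinja, so its core wire format is easy to sanity-check with a stripped-down template. A minimal sketch (assuming the `jinja2` package is available) that reproduces only the `[INST]`/`[/INST]` wrapping and `eos_token` handling, not the full template:

```python
from jinja2 import Template

# Stripped-down analogue of the chat template above: user turns are wrapped
# in [INST]...[/INST]; assistant turns are emitted verbatim and closed with
# the end-of-sequence token, as in the full template.
mini_template = Template(
    "{{ bos_token }}"
    "{% for m in messages %}"
    "{% if m.role == 'user' %}[INST]{{ m.content }}[/INST]"
    "{% elif m.role == 'assistant' %}{{ m.content }}{{ eos_token }}"
    "{% endif %}"
    "{% endfor %}"
)

rendered = mini_template.render(
    bos_token="<s>",
    eos_token="</s>",
    messages=[
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi there"},
    ],
)
print(rendered)  # <s>[INST]Hello[/INST]Hi there</s>
```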
checkpoint-1000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:12b4889283a60693e0350f41d1c35da2afe6bf9b52ddd9e4638359ac8b1b35e1
+ size 395621270
checkpoint-1000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1386eef4b1eb028094ec4aa5b922bfa06a81c1a2af2e30e24f664112ea8c2a97
+ size 14244
checkpoint-1000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3500052158a277b5f1ebe6118bfdf81b3aa66b313b0687ab8bd978fbde8276fc
3
+ size 1064
checkpoint-1000/tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8fbe698063980a09a4487ec5bbcc545aab380d686f9d918cad649bb49d257f83
3
+ size 17078265
checkpoint-1000/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
1
+ {
2
+ "add_prefix_space": null,
3
+ "backend": "tokenizers",
4
+ "bos_token": "<s>",
5
+ "clean_up_tokenization_spaces": false,
6
+ "eos_token": "</s>",
7
+ "is_local": false,
8
+ "legacy": true,
9
+ "model_max_length": 1000000000000000019884624838656,
10
+ "pad_token": "<pad>",
11
+ "processor_class": "PixtralProcessor",
12
+ "tokenizer_class": "TokenizersBackend",
13
+ "unk_token": "<unk>",
14
+ "use_default_system_prompt": false
15
+ }
checkpoint-1000/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-1000/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2c4a7d589a303b75f5a9711e2b12b41fa2b08921854a2742cce74a124cc0317
3
+ size 6584
checkpoint-1100/README.md ADDED
@@ -0,0 +1,209 @@
1
+ ---
2
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
3
+ library_name: peft
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
7
+ - grpo
8
+ - lora
9
+ - transformers
10
+ - trl
11
+ ---
12
+
13
+ # Model Card for Model ID
14
+
15
+ <!-- Provide a quick summary of what the model is/does. -->
16
+
17
+
18
+
19
+ ## Model Details
20
+
21
+ ### Model Description
22
+
23
+ <!-- Provide a longer summary of what this model is. -->
24
+
25
+
26
+
27
+ - **Developed by:** [More Information Needed]
28
+ - **Funded by [optional]:** [More Information Needed]
29
+ - **Shared by [optional]:** [More Information Needed]
30
+ - **Model type:** [More Information Needed]
31
+ - **Language(s) (NLP):** [More Information Needed]
32
+ - **License:** [More Information Needed]
33
+ - **Finetuned from model [optional]:** [More Information Needed]
34
+
35
+ ### Model Sources [optional]
36
+
37
+ <!-- Provide the basic links for the model. -->
38
+
39
+ - **Repository:** [More Information Needed]
40
+ - **Paper [optional]:** [More Information Needed]
41
+ - **Demo [optional]:** [More Information Needed]
42
+
43
+ ## Uses
44
+
45
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
46
+
47
+ ### Direct Use
48
+
49
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
50
+
51
+ [More Information Needed]
52
+
53
+ ### Downstream Use [optional]
54
+
55
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
56
+
57
+ [More Information Needed]
58
+
59
+ ### Out-of-Scope Use
60
+
61
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
62
+
63
+ [More Information Needed]
64
+
65
+ ## Bias, Risks, and Limitations
66
+
67
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
68
+
69
+ [More Information Needed]
70
+
71
+ ### Recommendations
72
+
73
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
74
+
75
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
76
+
77
+ ## How to Get Started with the Model
78
+
79
+ Use the code below to get started with the model.
80
+
81
+ [More Information Needed]
82
+
83
+ ## Training Details
84
+
85
+ ### Training Data
86
+
87
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
88
+
89
+ [More Information Needed]
90
+
91
+ ### Training Procedure
92
+
93
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
94
+
95
+ #### Preprocessing [optional]
96
+
97
+ [More Information Needed]
98
+
99
+
100
+ #### Training Hyperparameters
101
+
102
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
103
+
104
+ #### Speeds, Sizes, Times [optional]
105
+
106
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
107
+
108
+ [More Information Needed]
109
+
110
+ ## Evaluation
111
+
112
+ <!-- This section describes the evaluation protocols and provides the results. -->
113
+
114
+ ### Testing Data, Factors & Metrics
115
+
116
+ #### Testing Data
117
+
118
+ <!-- This should link to a Dataset Card if possible. -->
119
+
120
+ [More Information Needed]
121
+
122
+ #### Factors
123
+
124
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
125
+
126
+ [More Information Needed]
127
+
128
+ #### Metrics
129
+
130
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
131
+
132
+ [More Information Needed]
133
+
134
+ ### Results
135
+
136
+ [More Information Needed]
137
+
138
+ #### Summary
139
+
140
+
141
+
142
+ ## Model Examination [optional]
143
+
144
+ <!-- Relevant interpretability work for the model goes here -->
145
+
146
+ [More Information Needed]
147
+
148
+ ## Environmental Impact
149
+
150
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
151
+
152
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
153
+
154
+ - **Hardware Type:** [More Information Needed]
155
+ - **Hours used:** [More Information Needed]
156
+ - **Cloud Provider:** [More Information Needed]
157
+ - **Compute Region:** [More Information Needed]
158
+ - **Carbon Emitted:** [More Information Needed]
159
+
160
+ ## Technical Specifications [optional]
161
+
162
+ ### Model Architecture and Objective
163
+
164
+ [More Information Needed]
165
+
166
+ ### Compute Infrastructure
167
+
168
+ [More Information Needed]
169
+
170
+ #### Hardware
171
+
172
+ [More Information Needed]
173
+
174
+ #### Software
175
+
176
+ [More Information Needed]
177
+
178
+ ## Citation [optional]
179
+
180
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
181
+
182
+ **BibTeX:**
183
+
184
+ [More Information Needed]
185
+
186
+ **APA:**
187
+
188
+ [More Information Needed]
189
+
190
+ ## Glossary [optional]
191
+
192
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
193
+
194
+ [More Information Needed]
195
+
196
+ ## More Information [optional]
197
+
198
+ [More Information Needed]
199
+
200
+ ## Model Card Authors [optional]
201
+
202
+ [More Information Needed]
203
+
204
+ ## Model Card Contact
205
+
206
+ [More Information Needed]
207
+ ### Framework versions
208
+
209
+ - PEFT 0.18.1
checkpoint-1100/adapter_config.json ADDED
@@ -0,0 +1,46 @@
1
+ {
2
+ "alora_invocation_tokens": null,
3
+ "alpha_pattern": {},
4
+ "arrow_config": null,
5
+ "auto_mapping": null,
6
+ "base_model_name_or_path": "mistralai/Ministral-3-3B-Instruct-2512-BF16",
7
+ "bias": "none",
8
+ "corda_config": null,
9
+ "ensure_weight_tying": false,
10
+ "eva_config": null,
11
+ "exclude_modules": null,
12
+ "fan_in_fan_out": false,
13
+ "inference_mode": true,
14
+ "init_lora_weights": true,
15
+ "layer_replication": null,
16
+ "layers_pattern": null,
17
+ "layers_to_transform": null,
18
+ "loftq_config": {},
19
+ "lora_alpha": 64,
20
+ "lora_bias": false,
21
+ "lora_dropout": 0.05,
22
+ "megatron_config": null,
23
+ "megatron_core": "megatron.core",
24
+ "modules_to_save": null,
25
+ "peft_type": "LORA",
26
+ "peft_version": "0.18.1",
27
+ "qalora_group_size": 16,
28
+ "r": 32,
29
+ "rank_pattern": {},
30
+ "revision": null,
31
+ "target_modules": [
32
+ "o_proj",
33
+ "gate_proj",
34
+ "q_proj",
35
+ "up_proj",
36
+ "v_proj",
37
+ "k_proj",
38
+ "down_proj"
39
+ ],
40
+ "target_parameters": null,
41
+ "task_type": "CAUSAL_LM",
42
+ "trainable_token_indices": null,
43
+ "use_dora": false,
44
+ "use_qalora": false,
45
+ "use_rslora": false
46
+ }
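This adapter config sets `r=32`, `lora_alpha=64`, and `use_rslora=false`, so the low-rank update is scaled by `alpha / r = 2.0`. A small sketch of the scaling rule and the per-layer parameter cost of such an adapter (the 4096-dim layer below is a hypothetical example, not a dimension read from this model):

```python
import math

def lora_scaling(lora_alpha, r, use_rslora=False):
    """Factor applied to the low-rank update B @ A @ x.
    Plain LoRA uses alpha / r; rsLoRA uses alpha / sqrt(r)."""
    return lora_alpha / math.sqrt(r) if use_rslora else lora_alpha / r

def lora_extra_params(d_in, d_out, r):
    """Trainable parameters a rank-r adapter adds to one Linear layer:
    A has shape (r, d_in), B has shape (d_out, r)."""
    return r * d_in + d_out * r

print(lora_scaling(64, 32))               # 2.0
print(lora_extra_params(4096, 4096, 32))  # 262144
```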
checkpoint-1100/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2bf349119a9fbcd2719e7f73ced353658d179dfd8590f8e2f2767c992a9f3bd9
3
+ size 270117632
checkpoint-1100/chat_template.jinja ADDED
@@ -0,0 +1,121 @@
1
+ {#- Default system message if no system prompt is passed. #}
2
+ {%- set default_system_message = 'You are Ministral-3-3B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
3
+
4
+ {#- Beginning-of-sequence token. #}
5
+ {{- bos_token }}
6
+
7
+ {#- Handle system prompt if it exists. #}
8
+ {#- System prompt supports text content or text chunks. #}
9
+ {%- if messages[0]['role'] == 'system' %}
10
+ {{- '[SYSTEM_PROMPT]' -}}
11
+ {%- if messages[0]['content'] is string %}
12
+ {{- messages[0]['content'] -}}
13
+ {%- else %}
14
+ {%- for block in messages[0]['content'] %}
15
+ {%- if block['type'] == 'text' %}
16
+ {{- block['text'] }}
17
+ {%- else %}
18
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
19
+ {%- endif %}
20
+ {%- endfor %}
21
+ {%- endif %}
22
+ {{- '[/SYSTEM_PROMPT]' -}}
23
+ {%- set loop_messages = messages[1:] %}
24
+ {%- else %}
25
+ {%- set loop_messages = messages %}
26
+ {%- if default_system_message != '' %}
27
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
28
+ {%- endif %}
29
+ {%- endif %}
30
+
31
+
32
+ {#- Tools definition #}
33
+ {%- set tools_definition = '' %}
34
+ {%- set has_tools = false %}
35
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
36
+ {%- set has_tools = true %}
37
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
38
+ {{- tools_definition }}
39
+ {%- endif %}
40
+
41
+ {#- Checks for alternating user/assistant messages. #}
42
+ {%- set ns = namespace(index=0) %}
43
+ {%- for message in loop_messages %}
44
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
45
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
46
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
47
+ {%- endif %}
48
+ {%- set ns.index = ns.index + 1 %}
49
+ {%- endif %}
50
+ {%- endfor %}
51
+
52
+ {#- Handle conversation messages. #}
53
+ {%- for message in loop_messages %}
54
+
55
+ {#- User messages support text content or text and image chunks. #}
56
+ {%- if message['role'] == 'user' %}
57
+ {%- if message['content'] is string %}
58
+ {{- '[INST]' + message['content'] + '[/INST]' }}
59
+ {%- elif message['content'] | length > 0 %}
60
+ {{- '[INST]' }}
61
+ {%- if message['content'] | length == 2 %}
62
+ {%- set blocks = message['content'] | sort(attribute='type') %}
63
+ {%- else %}
64
+ {%- set blocks = message['content'] %}
65
+ {%- endif %}
66
+ {%- for block in blocks %}
67
+ {%- if block['type'] == 'text' %}
68
+ {{- block['text'] }}
69
+ {%- elif block['type'] in ['image', 'image_url'] %}
70
+ {{- '[IMG]' }}
71
+ {%- else %}
72
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
73
+ {%- endif %}
74
+ {%- endfor %}
75
+ {{- '[/INST]' }}
76
+ {%- else %}
77
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
78
+ {%- endif %}
79
+
80
+ {#- Assistant messages support text content, text chunks, or tool calls. #}
81
+ {%- elif message['role'] == 'assistant' %}
82
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
83
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
84
+ {%- endif %}
85
+
86
+ {%- if message['content'] is string %}
87
+ {{- message['content'] }}
88
+ {%- elif message['content'] | length > 0 %}
89
+ {%- for block in message['content'] %}
90
+ {%- if block['type'] == 'text' %}
91
+ {{- block['text'] }}
92
+ {%- else %}
93
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
94
+ {%- endif %}
95
+ {%- endfor %}
96
+ {%- endif %}
97
+
98
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
99
+ {%- for tool in message['tool_calls'] %}
100
+ {%- set arguments = tool['function']['arguments'] %}
101
+ {%- if arguments is not string %}
102
+ {%- set arguments = arguments|tojson|safe %}
103
+ {%- elif arguments == '' %}
104
+ {%- set arguments = '{}' %}
105
+ {%- endif %}
106
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
107
+ {%- endfor %}
108
+ {%- endif %}
109
+
110
+ {#- End of sequence token for each assistant message. #}
111
+ {{- eos_token }}
112
+
113
+ {#- Tool messages only support text content. #}
114
+ {%- elif message['role'] == 'tool' %}
115
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
116
+
117
+ {#- Raise exception for unsupported roles. #}
118
+ {%- else %}
119
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
120
+ {%- endif %}
121
+ {%- endfor %}
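The alternation guard in the template above (the `ns.index` loop) counts only user turns and assistant turns without tool calls, and requires them to alternate starting with a user message. An equivalent sketch in plain Python (the function name is illustrative):

```python
def check_alternation(messages):
    """Mirror the template's guard: after the optional system message,
    user/assistant turns must alternate starting with 'user'; assistant
    messages that only carry tool calls, and 'tool' results, are skipped."""
    index = 0
    for m in messages:
        plain_assistant = m["role"] == "assistant" and not m.get("tool_calls")
        if m["role"] == "user" or plain_assistant:
            if (m["role"] == "user") != (index % 2 == 0):
                raise ValueError("roles must alternate user/assistant")
            index += 1

# Tool-call turns and tool results do not break the alternation:
check_alternation([
    {"role": "user", "content": "what's the weather?"},
    {"role": "assistant",
     "tool_calls": [{"function": {"name": "get_weather", "arguments": "{}"}}]},
    {"role": "tool", "content": "sunny"},
    {"role": "assistant", "content": "It's sunny."},
])
```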
checkpoint-1100/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:d397f9f6dbd090971a9270d0ecf828ff3be609d26aa4cd987c7528832100ec6d
3
+ size 395621270
checkpoint-1100/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f88297dd2786b93401d8efad5ce69344a392ab1006d13144db1dc12825112f3d
3
+ size 14244
checkpoint-1100/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ca47c5279d99f145d161d9dccde90d4a61729e82e6a70e588362cd372353e56f
3
+ size 1064
checkpoint-1100/tokenizer.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8fbe698063980a09a4487ec5bbcc545aab380d686f9d918cad649bb49d257f83
3
+ size 17078265
checkpoint-1100/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
1
+ {
2
+ "add_prefix_space": null,
3
+ "backend": "tokenizers",
4
+ "bos_token": "<s>",
5
+ "clean_up_tokenization_spaces": false,
6
+ "eos_token": "</s>",
7
+ "is_local": false,
8
+ "legacy": true,
9
+ "model_max_length": 1000000000000000019884624838656,
10
+ "pad_token": "<pad>",
11
+ "processor_class": "PixtralProcessor",
12
+ "tokenizer_class": "TokenizersBackend",
13
+ "unk_token": "<unk>",
14
+ "use_default_system_prompt": false
15
+ }
checkpoint-1100/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-1100/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2c4a7d589a303b75f5a9711e2b12b41fa2b08921854a2742cce74a124cc0317
3
+ size 6584
checkpoint-1200/README.md ADDED
@@ -0,0 +1,209 @@
1
+ ---
2
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
3
+ library_name: peft
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
7
+ - grpo
8
+ - lora
9
+ - transformers
10
+ - trl
11
+ ---
12
+
13
+ # Model Card for Model ID
14
+
15
+ <!-- Provide a quick summary of what the model is/does. -->
16
+
17
+
18
+
19
+ ## Model Details
20
+
21
+ ### Model Description
22
+
23
+ <!-- Provide a longer summary of what this model is. -->
24
+
25
+
26
+
27
+ - **Developed by:** [More Information Needed]
28
+ - **Funded by [optional]:** [More Information Needed]
29
+ - **Shared by [optional]:** [More Information Needed]
30
+ - **Model type:** [More Information Needed]
31
+ - **Language(s) (NLP):** [More Information Needed]
32
+ - **License:** [More Information Needed]
33
+ - **Finetuned from model [optional]:** [More Information Needed]
34
+
35
+ ### Model Sources [optional]
36
+
37
+ <!-- Provide the basic links for the model. -->
38
+
39
+ - **Repository:** [More Information Needed]
40
+ - **Paper [optional]:** [More Information Needed]
41
+ - **Demo [optional]:** [More Information Needed]
42
+
43
+ ## Uses
44
+
45
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
46
+
47
+ ### Direct Use
48
+
49
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
50
+
51
+ [More Information Needed]
52
+
53
+ ### Downstream Use [optional]
54
+
55
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
56
+
57
+ [More Information Needed]
58
+
59
+ ### Out-of-Scope Use
60
+
61
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
62
+
63
+ [More Information Needed]
64
+
65
+ ## Bias, Risks, and Limitations
66
+
67
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
68
+
69
+ [More Information Needed]
70
+
71
+ ### Recommendations
72
+
73
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
74
+
75
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
76
+
77
+ ## How to Get Started with the Model
78
+
79
+ Use the code below to get started with the model.
80
+
81
+ [More Information Needed]
82
+
83
+ ## Training Details
84
+
85
+ ### Training Data
86
+
87
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
88
+
89
+ [More Information Needed]
90
+
91
+ ### Training Procedure
92
+
93
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
94
+
95
+ #### Preprocessing [optional]
96
+
97
+ [More Information Needed]
98
+
99
+
100
+ #### Training Hyperparameters
101
+
102
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
103
+
104
+ #### Speeds, Sizes, Times [optional]
105
+
106
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
107
+
108
+ [More Information Needed]
109
+
110
+ ## Evaluation
111
+
112
+ <!-- This section describes the evaluation protocols and provides the results. -->
113
+
114
+ ### Testing Data, Factors & Metrics
115
+
116
+ #### Testing Data
117
+
118
+ <!-- This should link to a Dataset Card if possible. -->
119
+
120
+ [More Information Needed]
121
+
122
+ #### Factors
123
+
124
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
125
+
126
+ [More Information Needed]
127
+
128
+ #### Metrics
129
+
130
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
131
+
132
+ [More Information Needed]
133
+
134
+ ### Results
135
+
136
+ [More Information Needed]
137
+
138
+ #### Summary
139
+
140
+
141
+
142
+ ## Model Examination [optional]
143
+
144
+ <!-- Relevant interpretability work for the model goes here -->
145
+
146
+ [More Information Needed]
147
+
148
+ ## Environmental Impact
149
+
150
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
151
+
152
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
153
+
154
+ - **Hardware Type:** [More Information Needed]
155
+ - **Hours used:** [More Information Needed]
156
+ - **Cloud Provider:** [More Information Needed]
157
+ - **Compute Region:** [More Information Needed]
158
+ - **Carbon Emitted:** [More Information Needed]
159
+
160
+ ## Technical Specifications [optional]
161
+
162
+ ### Model Architecture and Objective
163
+
164
+ [More Information Needed]
165
+
166
+ ### Compute Infrastructure
167
+
168
+ [More Information Needed]
169
+
170
+ #### Hardware
171
+
172
+ [More Information Needed]
173
+
174
+ #### Software
175
+
176
+ [More Information Needed]
177
+
178
+ ## Citation [optional]
179
+
180
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
181
+
182
+ **BibTeX:**
183
+
184
+ [More Information Needed]
185
+
186
+ **APA:**
187
+
188
+ [More Information Needed]
189
+
190
+ ## Glossary [optional]
191
+
192
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
193
+
194
+ [More Information Needed]
195
+
196
+ ## More Information [optional]
197
+
198
+ [More Information Needed]
199
+
200
+ ## Model Card Authors [optional]
201
+
202
+ [More Information Needed]
203
+
204
+ ## Model Card Contact
205
+
206
+ [More Information Needed]
207
+ ### Framework versions
208
+
209
+ - PEFT 0.18.1
checkpoint-1200/adapter_config.json ADDED
@@ -0,0 +1,46 @@
1
+ {
2
+ "alora_invocation_tokens": null,
3
+ "alpha_pattern": {},
4
+ "arrow_config": null,
5
+ "auto_mapping": null,
6
+ "base_model_name_or_path": "mistralai/Ministral-3-3B-Instruct-2512-BF16",
7
+ "bias": "none",
8
+ "corda_config": null,
9
+ "ensure_weight_tying": false,
10
+ "eva_config": null,
11
+ "exclude_modules": null,
12
+ "fan_in_fan_out": false,
13
+ "inference_mode": true,
14
+ "init_lora_weights": true,
15
+ "layer_replication": null,
16
+ "layers_pattern": null,
17
+ "layers_to_transform": null,
18
+ "loftq_config": {},
19
+ "lora_alpha": 64,
20
+ "lora_bias": false,
21
+ "lora_dropout": 0.05,
22
+ "megatron_config": null,
23
+ "megatron_core": "megatron.core",
24
+ "modules_to_save": null,
25
+ "peft_type": "LORA",
26
+ "peft_version": "0.18.1",
27
+ "qalora_group_size": 16,
28
+ "r": 32,
29
+ "rank_pattern": {},
30
+ "revision": null,
31
+ "target_modules": [
32
+ "o_proj",
33
+ "gate_proj",
34
+ "q_proj",
35
+ "up_proj",
36
+ "v_proj",
37
+ "k_proj",
38
+ "down_proj"
39
+ ],
40
+ "target_parameters": null,
41
+ "task_type": "CAUSAL_LM",
42
+ "trainable_token_indices": null,
43
+ "use_dora": false,
44
+ "use_qalora": false,
45
+ "use_rslora": false
46
+ }
checkpoint-1200/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5008113c7e7a6ffd37e99495609ad6d590a357094bbba9f982bd3cfe63c68d9
3
+ size 270117632
checkpoint-1200/chat_template.jinja ADDED
@@ -0,0 +1,121 @@
1
+ {#- Default system message if no system prompt is passed. #}
2
+ {%- set default_system_message = 'You are Ministral-3-3B-Instruct-2512, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you\'re not sure about some information or when the user\'s request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don\'t have the information and avoid making up anything.\nIf the user\'s question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?").\nYou are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.' %}
3
+
4
+ {#- Beginning-of-sequence token. #}
5
+ {{- bos_token }}
6
+
7
+ {#- Handle system prompt if it exists. #}
8
+ {#- System prompt supports text content or text chunks. #}
9
+ {%- if messages[0]['role'] == 'system' %}
10
+ {{- '[SYSTEM_PROMPT]' -}}
11
+ {%- if messages[0]['content'] is string %}
12
+ {{- messages[0]['content'] -}}
13
+ {%- else %}
14
+ {%- for block in messages[0]['content'] %}
15
+ {%- if block['type'] == 'text' %}
16
+ {{- block['text'] }}
17
+ {%- else %}
18
+ {{- raise_exception('Only text chunks are supported in system message contents.') }}
19
+ {%- endif %}
20
+ {%- endfor %}
21
+ {%- endif %}
22
+ {{- '[/SYSTEM_PROMPT]' -}}
23
+ {%- set loop_messages = messages[1:] %}
24
+ {%- else %}
25
+ {%- set loop_messages = messages %}
26
+ {%- if default_system_message != '' %}
27
+ {{- '[SYSTEM_PROMPT]' + default_system_message + '[/SYSTEM_PROMPT]' }}
28
+ {%- endif %}
29
+ {%- endif %}
30
+
31
+
32
+ {#- Tools definition #}
33
+ {%- set tools_definition = '' %}
34
+ {%- set has_tools = false %}
35
+ {%- if tools is defined and tools is not none and tools|length > 0 %}
36
+ {%- set has_tools = true %}
37
+ {%- set tools_definition = '[AVAILABLE_TOOLS]' + (tools| tojson) + '[/AVAILABLE_TOOLS]' %}
38
+ {{- tools_definition }}
39
+ {%- endif %}
40
+
41
+ {#- Checks for alternating user/assistant messages. #}
42
+ {%- set ns = namespace(index=0) %}
43
+ {%- for message in loop_messages %}
44
+ {%- if message.role == 'user' or (message.role == 'assistant' and (message.tool_calls is not defined or message.tool_calls is none or message.tool_calls | length == 0)) %}
45
+ {%- if (message['role'] == 'user') != (ns.index % 2 == 0) %}
46
+ {{- raise_exception('After the optional system message, conversation roles must alternate user and assistant roles except for tool calls and results.') }}
47
+ {%- endif %}
48
+ {%- set ns.index = ns.index + 1 %}
49
+ {%- endif %}
50
+ {%- endfor %}
51
+
52
+ {#- Handle conversation messages. #}
53
+ {%- for message in loop_messages %}
54
+
55
+ {#- User messages support text content, or text and image chunks. #}
56
+ {%- if message['role'] == 'user' %}
57
+ {%- if message['content'] is string %}
58
+ {{- '[INST]' + message['content'] + '[/INST]' }}
59
+ {%- elif message['content'] | length > 0 %}
60
+ {{- '[INST]' }}
61
+ {%- if message['content'] | length == 2 %}
62
+ {%- set blocks = message['content'] | sort(attribute='type') %}
63
+ {%- else %}
64
+ {%- set blocks = message['content'] %}
65
+ {%- endif %}
66
+ {%- for block in blocks %}
67
+ {%- if block['type'] == 'text' %}
68
+ {{- block['text'] }}
69
+ {%- elif block['type'] in ['image', 'image_url'] %}
70
+ {{- '[IMG]' }}
71
+ {%- else %}
72
+ {{- raise_exception('Only text, image and image_url chunks are supported in user message content.') }}
73
+ {%- endif %}
74
+ {%- endfor %}
75
+ {{- '[/INST]' }}
76
+ {%- else %}
77
+ {{- raise_exception('User message must have a string or a list of chunks in content') }}
78
+ {%- endif %}
79
+
80
+ {#- Assistant messages support text content, text chunks, or tool calls. #}
81
+ {%- elif message['role'] == 'assistant' %}
82
+ {%- if (message['content'] is none or message['content'] == '' or message['content']|length == 0) and (message['tool_calls'] is not defined or message['tool_calls'] is none or message['tool_calls']|length == 0) %}
83
+ {{- raise_exception('Assistant message must have a string or a list of chunks in content or a list of tool calls.') }}
84
+ {%- endif %}
85
+
86
+ {%- if message['content'] is string %}
87
+ {{- message['content'] }}
88
+ {%- elif message['content'] | length > 0 %}
89
+ {%- for block in message['content'] %}
90
+ {%- if block['type'] == 'text' %}
91
+ {{- block['text'] }}
92
+ {%- else %}
93
+ {{- raise_exception('Only text chunks are supported in assistant message contents.') }}
94
+ {%- endif %}
95
+ {%- endfor %}
96
+ {%- endif %}
97
+
98
+ {%- if message['tool_calls'] is defined and message['tool_calls'] is not none and message['tool_calls']|length > 0 %}
99
+ {%- for tool in message['tool_calls'] %}
100
+ {%- set arguments = tool['function']['arguments'] %}
101
+ {%- if arguments is not string %}
102
+ {%- set arguments = arguments|tojson|safe %}
103
+ {%- elif arguments == '' %}
104
+ {%- set arguments = '{}' %}
105
+ {%- endif %}
106
+ {{- '[TOOL_CALLS]' + tool['function']['name'] + '[ARGS]' + arguments }}
107
+ {%- endfor %}
108
+ {%- endif %}
109
+
110
+ {#- End-of-sequence token after each assistant message. #}
111
+ {{- eos_token }}
112
+
113
+ {#- Tool messages only support text content. #}
114
+ {%- elif message['role'] == 'tool' %}
115
+ {{- '[TOOL_RESULTS]' + message['content']|string + '[/TOOL_RESULTS]' }}
116
+
117
+ {#- Raise exception for unsupported roles. #}
118
+ {%- else %}
119
+ {{- raise_exception('Only user, assistant and tool roles are supported, got ' + message['role']) }}
120
+ {%- endif %}
121
+ {%- endfor %}
checkpoint-1200/optimizer.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:545dc6185479361be79f43c5fb074fe702598e64a2008c514d8c844f74377194
3
+ size 395621270
checkpoint-1200/rng_state.pth ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb6957d7690d3648a34bdddf072ee706ca2c02365f5e39cc34bff2e55013f307
3
+ size 14244
checkpoint-1200/scheduler.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:38f60de3eb475cca0cbee345673dbe98d6cc0d0fe9b076e990ea9050a5d232f4
3
+ size 1064
checkpoint-1200/tokenizer.json ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8fbe698063980a09a4487ec5bbcc545aab380d686f9d918cad649bb49d257f83
3
+ size 17078265
checkpoint-1200/tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "add_prefix_space": null,
3
+ "backend": "tokenizers",
4
+ "bos_token": "<s>",
5
+ "clean_up_tokenization_spaces": false,
6
+ "eos_token": "</s>",
7
+ "is_local": false,
8
+ "legacy": true,
9
+ "model_max_length": 1000000000000000019884624838656,
10
+ "pad_token": "<pad>",
11
+ "processor_class": "PixtralProcessor",
12
+ "tokenizer_class": "TokenizersBackend",
13
+ "unk_token": "<unk>",
14
+ "use_default_system_prompt": false
15
+ }
checkpoint-1200/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-1200/training_args.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b2c4a7d589a303b75f5a9711e2b12b41fa2b08921854a2742cce74a124cc0317
3
+ size 6584
checkpoint-1300/README.md ADDED
@@ -0,0 +1,209 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ base_model: mistralai/Ministral-3-3B-Instruct-2512-BF16
3
+ library_name: peft
4
+ pipeline_tag: text-generation
5
+ tags:
6
+ - base_model:adapter:mistralai/Ministral-3-3B-Instruct-2512-BF16
7
+ - grpo
8
+ - lora
9
+ - transformers
10
+ - trl
11
+ ---
12
+
13
+ # Model Card for Model ID
14
+
15
+ <!-- Provide a quick summary of what the model is/does. -->
16
+
17
+
18
+
19
+ ## Model Details
20
+
21
+ ### Model Description
22
+
23
+ <!-- Provide a longer summary of what this model is. -->
24
+
25
+
26
+
27
+ - **Developed by:** [More Information Needed]
28
+ - **Funded by [optional]:** [More Information Needed]
29
+ - **Shared by [optional]:** [More Information Needed]
30
+ - **Model type:** [More Information Needed]
31
+ - **Language(s) (NLP):** [More Information Needed]
32
+ - **License:** [More Information Needed]
33
+ - **Finetuned from model [optional]:** [More Information Needed]
34
+
35
+ ### Model Sources [optional]
36
+
37
+ <!-- Provide the basic links for the model. -->
38
+
39
+ - **Repository:** [More Information Needed]
40
+ - **Paper [optional]:** [More Information Needed]
41
+ - **Demo [optional]:** [More Information Needed]
42
+
43
+ ## Uses
44
+
45
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
46
+
47
+ ### Direct Use
48
+
49
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
50
+
51
+ [More Information Needed]
52
+
53
+ ### Downstream Use [optional]
54
+
55
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
56
+
57
+ [More Information Needed]
58
+
59
+ ### Out-of-Scope Use
60
+
61
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
62
+
63
+ [More Information Needed]
64
+
65
+ ## Bias, Risks, and Limitations
66
+
67
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
68
+
69
+ [More Information Needed]
70
+
71
+ ### Recommendations
72
+
73
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
74
+
75
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
76
+
77
+ ## How to Get Started with the Model
78
+
79
+ Use the code below to get started with the model.
80
+
81
+ [More Information Needed]
82
+
83
+ ## Training Details
84
+
85
+ ### Training Data
86
+
87
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
88
+
89
+ [More Information Needed]
90
+
91
+ ### Training Procedure
92
+
93
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
94
+
95
+ #### Preprocessing [optional]
96
+
97
+ [More Information Needed]
98
+
99
+
100
+ #### Training Hyperparameters
101
+
102
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
103
+
104
+ #### Speeds, Sizes, Times [optional]
105
+
106
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
107
+
108
+ [More Information Needed]
109
+
110
+ ## Evaluation
111
+
112
+ <!-- This section describes the evaluation protocols and provides the results. -->
113
+
114
+ ### Testing Data, Factors & Metrics
115
+
116
+ #### Testing Data
117
+
118
+ <!-- This should link to a Dataset Card if possible. -->
119
+
120
+ [More Information Needed]
121
+
122
+ #### Factors
123
+
124
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
125
+
126
+ [More Information Needed]
127
+
128
+ #### Metrics
129
+
130
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
131
+
132
+ [More Information Needed]
133
+
134
+ ### Results
135
+
136
+ [More Information Needed]
137
+
138
+ #### Summary
139
+
140
+
141
+
142
+ ## Model Examination [optional]
143
+
144
+ <!-- Relevant interpretability work for the model goes here -->
145
+
146
+ [More Information Needed]
147
+
148
+ ## Environmental Impact
149
+
150
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
151
+
152
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
153
+
154
+ - **Hardware Type:** [More Information Needed]
155
+ - **Hours used:** [More Information Needed]
156
+ - **Cloud Provider:** [More Information Needed]
157
+ - **Compute Region:** [More Information Needed]
158
+ - **Carbon Emitted:** [More Information Needed]
159
+
160
+ ## Technical Specifications [optional]
161
+
162
+ ### Model Architecture and Objective
163
+
164
+ [More Information Needed]
165
+
166
+ ### Compute Infrastructure
167
+
168
+ [More Information Needed]
169
+
170
+ #### Hardware
171
+
172
+ [More Information Needed]
173
+
174
+ #### Software
175
+
176
+ [More Information Needed]
177
+
178
+ ## Citation [optional]
179
+
180
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
181
+
182
+ **BibTeX:**
183
+
184
+ [More Information Needed]
185
+
186
+ **APA:**
187
+
188
+ [More Information Needed]
189
+
190
+ ## Glossary [optional]
191
+
192
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
193
+
194
+ [More Information Needed]
195
+
196
+ ## More Information [optional]
197
+
198
+ [More Information Needed]
199
+
200
+ ## Model Card Authors [optional]
201
+
202
+ [More Information Needed]
203
+
204
+ ## Model Card Contact
205
+
206
+ [More Information Needed]
207
+ ### Framework versions
208
+
209
+ - PEFT 0.18.1