---
library_name: transformers
base_model: ZeroAgency/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
tags:
- generated_from_trainer
datasets:
- bethrezen/grandmaster2-gemini2.5-mixed-81k
- ZeroAgency/ru-big-russian-dataset-v1.1
- ZeroAgency/hybrid_reasoning_dataset_ru-no-nebo-with-system-prompt
- bethrezen/shkolkovo-2
- bethrezen/mera-2
model-index:
- name: outputs/zero-mistral-beta50
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.10.0`

```yaml
# zero-mistral-beta50 - big-russian-1.1 + MERA 2 epoch + grandmaster2-mixed-81k
base_model: ZeroAgency/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
dataset_processes: 128
chat_template: jinja
chat_template_jinja: "{%- set today = strftime_now(\"%Y-%m-%d\") %}\n{%- set default_system_message = \"You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.\nYou power an AI assistant called Le Chat.\nYour knowledge base was last updated on 2023-10-01.\nThe current date is {today}.\n\nWhen you're not sure about some information or when the user's request requires up-to-date or specific data, you must use the available tools to fetch the information. Do not hesitate to use tools whenever they can provide a more accurate or complete response. If no relevant tools are available, then clearly state that you don't have the information and avoid making up anything.\nIf the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \\\"What are some good restaurants around me?\\\" => \\\"Where are you?\\\" or \\\"When is the next flight to Tokyo\\\" => \\\"Where do you travel from?\\\").\nYou are always very attentive to dates, in particular you try to resolve dates and when asked about information at specific dates, you discard information that is at another date.\nYou follow these instructions in all languages, and always respond to the user in the language they use or request.\nNext sections describe the capabilities that you have.\n\n# WEB BROWSING INSTRUCTIONS\n\nYou cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat.\n\n# MULTI-MODAL INSTRUCTIONS\n\nYou have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos.\nYou cannot read nor transcribe audio files or videos.\n\n# TOOL CALLING INSTRUCTIONS\n\nYou may have access to tools that you can use to fetch information or perform actions. You must use these tools in the following situations:\n\n1. When the request requires up-to-date information.\n2. When the request requires specific data that you do not have in your knowledge base.\n3. When the request involves actions that you cannot perform without tools.\n\nAlways prioritize using tools to provide the most accurate and helpful response. If tools are not available, inform the user that you cannot perform the requested action at the moment.\" %}\n\n{{- bos_token }}\n\n{%- if messages[0]['role'] == 'system' %}\n {%- if messages[0]['content'] is string %}\n {%- set system_message = messages[0]['content'] %}\n {%- else %}\n {%- set system_message = messages[0]['content'][0]['text'] %}\n {%- endif %}\n {%- set loop_messages = messages[1:] %}\n{%- else %}\n {%- set system_message = default_system_message %}\n {%- set loop_messages = messages %}\n{%- endif %}\n{{- '[SYSTEM_PROMPT]' + system_message + '[/SYSTEM_PROMPT]' }}\n\n{%- for message in loop_messages %}\n {%- if message['role'] == 'user' %}\n {%- if message['content'] is string %}\n {{- '[INST]' + message['content'] + '[/INST]' }}\n {%- else %}\n {{- '[INST]' }}\n {%- for block in message['content'] %}\n {%- if block['type'] == 'text' %}\n {{- block['text'] }}\n {%- elif block['type'] in ['image', 'image_url'] %}\n {{- '[IMG]' }}\n {%- else %}\n {{- raise_exception('Only text and image blocks are supported in message content!') }}\n {%- endif %}\n {%- endfor %}\n {{- '[/INST]' }}\n {%- endif %}\n {%- elif message['role'] == 'system' %}\n {%- if message['content'] is string %}\n {{- '[SYSTEM_PROMPT]' + message['content'] + '[/SYSTEM_PROMPT]' }}\n {%- else %}\n {{- '[SYSTEM_PROMPT]' + message['content'][0]['text'] + '[/SYSTEM_PROMPT]' }}\n {%- endif %}\n {%- elif message['role'] == 'assistant' %}\n {%- if message['content'] is string %}\n {{- message['content'] + eos_token }}\n {%- else %}\n {{- message['content'][0]['text'] + eos_token }}\n {%- endif %}\n {%- else %}\n {{- raise_exception('Only user, system and assistant roles are supported!') }}\n {%- endif %}\n{%- endfor %}"
dataset_prepared_path: ./last_run_prepared
datasets:
  - message_property_mappings:
      content: content
      role: role
    path: bethrezen/grandmaster2-gemini2.5-mixed-81k
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
  - message_property_mappings:
      content: content
      role: role
    path: ZeroAgency/ru-big-russian-dataset-v1.1
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
  - message_property_mappings:
      content: content
      role: role
    path: ZeroAgency/hybrid_reasoning_dataset_ru-no-nebo-with-system-prompt
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
  - message_property_mappings:
      content: content
      role: role
    path: bethrezen/shkolkovo-2
    trust_remote_code: false
    field_messages: messages
    type: chat_template
  - message_property_mappings:
      content: content
      role: role
    path: bethrezen/mera-2
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
    split: train
  - message_property_mappings:
      content: content
      role: role
    path: bethrezen/mera-2
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
    split: test
test_datasets:
  - message_property_mappings:
      content: content
      role: role
    path: ZeroAgency/ru-big-russian-dataset
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
    split: test
  - message_property_mappings:
      content: content
      role: role
    path: bethrezen/mera-2
    trust_remote_code: false
    field_messages: conversation
    type: chat_template
    split: test
# exact duplicates are already cleaned
#dataset_exact_deduplication: true
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
#learning_rate: 0.0001
learning_rate: 2e-5
#lisa_layers_attribute: model.layers
#is_mistral_derived_model: true
plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_layer_norm: true
liger_fused_linear_cross_entropy: true
#load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
# adapter: lora
# lora_alpha: 256
# lora_dropout: 0
# lora_target_linear: true
# lora_r: 256
lr_scheduler: cosine
#max_prompt_len: 8192
mean_resizing_embeddings: false
micro_batch_size: 4
num_epochs: 2
optimizer: adamw_torch_fused
output_dir: ./outputs/zero-mistral-beta50
sample_packing_bin_size: 400
sample_packing_group_size: 100000
save_only_model: true
save_safetensors: true
#sequence_len: 16392
sequence_len: 8192
min_sample_len: 1
shuffle_merged_datasets: true
skip_prepare_dataset: false
strict: false
train_on_inputs: false
weight_decay: 0.01
wandb_project: Zero-Mistral-3.2
wandb_name: Zero-Mistral-Small-3.2-beta50
bf16: true
fp16: false
tf32: false
flash_attention: true
save_strategy: epoch
eval_strategy: epoch
logging_steps: 1
save_total_limit: 5
warmup_steps: 0
sample_packing: true
pad_to_sequence_len: true
multipack_real_batches: true
curriculum_sampling: true
sample_packing_sequentially: true
group_by_length: true
seed: 42
data_seed: 42
max_shard_size: 5GB
#deepspeed: /workspace/axolotl/deepspeed_configs/zero1_torch_compile.json
#torch_compile: auto
log_with: wandb
trust_remote_code: true
use_fast_tokenizer: true
special_tokens:
  pad_token: ""
# qat:
#   activation_dtype: int8
#   weight_dtype: int8
#   group_size: 32
# quantization:
#   weight_dtype: "int8"
#   activation_dtype: "int8"
#   group_size: 32
```

</details>
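The `chat_template_jinja` above overrides the stock chat template at training time. To see what the rendered prompt looks like, the template can be exercised through `tokenizer.apply_chat_template`; the following is a minimal sketch, assuming a tokenizer that carries this template (the base-model id is used as a stand-in for the fine-tuned checkpoint):

```python
from transformers import AutoTokenizer

# Assumption: the checkpoint's tokenizer carries the chat template from the
# config above; the base-model id below is a stand-in for the fine-tune.
tokenizer = AutoTokenizer.from_pretrained(
    "ZeroAgency/Mistral-Small-3.2-24B-Instruct-2506-Text-Only"
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Say hi."},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "Now say it in Russian."},
]

# tokenize=False returns the raw prompt string, making the template's
# [SYSTEM_PROMPT]...[/SYSTEM_PROMPT] and [INST]...[/INST] markers visible;
# assistant turns are emitted verbatim followed by the EOS token.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```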

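For reference, each `datasets:` entry in the config points Axolotl at the turn list via `field_messages` and maps per-message keys via `message_property_mappings`. A hypothetical row for the `conversation`-keyed datasets might look like the following (the texts are illustrative, not taken from the corpora):

```python
# Hypothetical row shape for the datasets configured with
# `field_messages: conversation`; the contents are made up for illustration.
row = {
    "conversation": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ]
}

# bethrezen/shkolkovo-2 is configured with `field_messages: messages`, so its
# turn list would live under row["messages"] instead; role/content names map
# one-to-one through message_property_mappings.
```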
# outputs/zero-mistral-beta50

This model is a fine-tuned version of [ZeroAgency/Mistral-Small-3.2-24B-Instruct-2506-Text-Only](https://huggingface.co/ZeroAgency/Mistral-Small-3.2-24B-Instruct-2506-Text-Only) on the bethrezen/grandmaster2-gemini2.5-mixed-81k, ZeroAgency/ru-big-russian-dataset-v1.1, ZeroAgency/hybrid_reasoning_dataset_ru-no-nebo-with-system-prompt, bethrezen/shkolkovo-2 and bethrezen/mera-2 datasets.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent optimizer/scheduler setup is sketched after the framework versions below):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: adamw_torch_fused with betas=(0.9,0.999), epsilon=1e-08 and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 17630

### Training results

### Framework versions

- Transformers 4.52.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
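For readers who want the hyperparameters above as code, here is a minimal, self-contained sketch of an equivalent optimizer/scheduler setup (a toy module stands in for the 24B model; this is not the trainer's exact code):

```python
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # toy stand-in for the 24B checkpoint

# adamw_torch_fused with the betas/epsilon reported above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.01,  # weight_decay from the axolotl config
    fused=True,
)

# cosine schedule with the reported warmup and total step count
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=17630,
)

# Effective batch: micro_batch_size (4) x num_devices (8) x grad_accum (1)
assert 4 * 8 * 1 == 32  # matches total_train_batch_size above
```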