lhui2010 committed · verified
Commit 30076c7 · 1 Parent(s): 7faa779

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff.

Files changed (50)
  1. README.md +111 -0
  2. checkpoint-129855/config.json +48 -0
  3. checkpoint-129855/configuration_mamba.py +157 -0
  4. checkpoint-129855/model.safetensors +3 -0
  5. checkpoint-129855/optimizer.pt +3 -0
  6. checkpoint-129855/rng_state.pth +3 -0
  7. checkpoint-129855/scheduler.pt +3 -0
  8. checkpoint-129855/special_tokens_map.json +30 -0
  9. checkpoint-129855/tokenizer.json +0 -0
  10. checkpoint-129855/tokenizer_config.json +37 -0
  11. checkpoint-129855/trainer_state.json +1885 -0
  12. checkpoint-129855/training_args.bin +3 -0
  13. checkpoint-173140/config.json +48 -0
  14. checkpoint-173140/configuration_mamba.py +157 -0
  15. checkpoint-173140/model.safetensors +3 -0
  16. checkpoint-173140/optimizer.pt +3 -0
  17. checkpoint-173140/rng_state.pth +3 -0
  18. checkpoint-173140/scheduler.pt +3 -0
  19. checkpoint-173140/special_tokens_map.json +30 -0
  20. checkpoint-173140/tokenizer.json +0 -0
  21. checkpoint-173140/tokenizer_config.json +37 -0
  22. checkpoint-173140/trainer_state.json +2507 -0
  23. checkpoint-173140/training_args.bin +3 -0
  24. checkpoint-216425/config.json +48 -0
  25. checkpoint-216425/configuration_mamba.py +157 -0
  26. checkpoint-216425/model.safetensors +3 -0
  27. checkpoint-216425/optimizer.pt +3 -0
  28. checkpoint-216425/rng_state.pth +3 -0
  29. checkpoint-216425/scheduler.pt +3 -0
  30. checkpoint-216425/special_tokens_map.json +30 -0
  31. checkpoint-216425/tokenizer.json +0 -0
  32. checkpoint-216425/tokenizer_config.json +37 -0
  33. checkpoint-216425/trainer_state.json +3122 -0
  34. checkpoint-216425/training_args.bin +3 -0
  35. checkpoint-43285/config.json +48 -0
  36. checkpoint-43285/configuration_mamba.py +157 -0
  37. checkpoint-43285/model.safetensors +3 -0
  38. checkpoint-43285/optimizer.pt +3 -0
  39. checkpoint-43285/rng_state.pth +3 -0
  40. checkpoint-43285/scheduler.pt +3 -0
  41. checkpoint-43285/special_tokens_map.json +30 -0
  42. checkpoint-43285/tokenizer.json +0 -0
  43. checkpoint-43285/tokenizer_config.json +37 -0
  44. checkpoint-43285/trainer_state.json +648 -0
  45. checkpoint-43285/training_args.bin +3 -0
  46. checkpoint-86570/config.json +48 -0
  47. checkpoint-86570/configuration_mamba.py +157 -0
  48. checkpoint-86570/model.safetensors +3 -0
  49. checkpoint-86570/optimizer.pt +3 -0
  50. checkpoint-86570/rng_state.pth +3 -0
README.md ADDED
@@ -0,0 +1,111 @@
# Nodule-AI: A Deep Learning Model for Nodule-Specific Gene Identification

## Model Description

This model is a fine-tuned version of the [zhangtaolab/plant-dnamamba-BPE](https://huggingface.co/zhangtaolab/plant-dnamamba-BPE) architecture, specialized for identifying nodule-specific genes from their promoter DNA sequences. The base model was pretrained on plant genomic sequences using a Mamba-based architecture with Byte Pair Encoding (BPE), which we adapted to promoter analysis through targeted fine-tuning.

**Key Features**:
- Vocabulary of 7,999 tokens with specialized DNA tokenization (demonstrated below)
- Optimized for promoter sequence analysis (typical input: 3,000 bp upstream of the TSS)
- Mamba architecture enabling efficient long-sequence processing
- Classification head for nodule-specific gene identification

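The BPE tokenizer compresses raw nucleotides into multi-base tokens, which is what makes 3,000 bp inputs tractable. A quick way to see this is the minimal sketch below; it reuses the repository id from the inference example further down.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lhui2010/nodule-AI", trust_remote_code=True)
tokens = tokenizer.tokenize("ATGCGTCTCA" * 300)  # a 3,000 bp toy sequence
print(len(tokens), tokens[:5])  # far fewer tokens than 3,000 single bases
```
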
## Intended Use

This model is designed for **plant genomics researchers** studying root nodule symbiosis mechanisms. Specifically, it predicts whether a given promoter sequence regulates a gene that is specifically induced in root nodules.

Example applications:
- Annotating novel plant genomes for nodule-related functions
- Identifying regulatory motifs in nodule-specific promoters
- Comparative analysis of promoter architectures across nodulating species

## Training Data

The model was fine-tuned on a large dataset of plant promoter sequences whose nodule-induced expression patterns were established by RNA-Seq:

| Data Category | Samples | Species Included |
|---------------|---------|------------------|
| Nodule-specific promoters | 175,365 | *Aeschynomene evenia*, *Alnus trabeculosa*, *Arachis hypogaea*, *Chamaecrista pumila*, *Coriaria nepalensis*, *Datisca glomerata*, *Elaeagnus umbellata*, *Glycine max*, *Hippophae rhamnoides*, *Lotus japonicus*, *Medicago truncatula*, *Mimosa pudica*, *Parasponia andersonii*, *Phaseolus vulgaris* |
| Non-nodule promoters | 170,912 | Matching species background sets |

**Sequence characteristics**:
- 3,000 bp upstream of the transcription start site (TSS); see the extraction sketch below
- Balanced positive/negative representation
- Large-scale collection across nodulating species

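Promoter windows of this kind can be cut from a genome assembly given TSS coordinates and strand. The helper below is an illustrative sketch, not the pipeline used to build the training set; the `genome` dictionary and the example coordinates are hypothetical.

```python
# Minimal sketch: extract the 3,000 bp window upstream of a TSS.
# `genome` is assumed to be a dict mapping chromosome names to sequence
# strings (e.g., parsed from a FASTA file).

COMPLEMENT = str.maketrans("ACGTNacgtn", "TGCANtgcan")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def promoter_window(genome: dict, chrom: str, tss: int, strand: str, size: int = 3000) -> str:
    """Return the `size` bp immediately upstream of a 0-based TSS."""
    if strand == "+":
        return genome[chrom][max(0, tss - size):tss]
    # For a minus-strand gene, "upstream" lies to the right of the TSS.
    return reverse_complement(genome[chrom][tss + 1 : tss + 1 + size])

# Example (hypothetical coordinates):
# seq = promoter_window(genome, "chr1", 1_250_000, "+")
```
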
## Training Procedure

**Fine-tuning Parameters** (a matching `Trainer` setup is sketched after this list):
- **Epochs**: 5
- **Batch size**: 8
- **Learning rate**: 1e-5
- **Hardware**: 1 × Tesla V100 32 GB GPU

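For orientation, a `transformers` Trainer configuration consistent with these parameters might look like the following. It is a sketch under stated assumptions, not the exact training script: `train_ds`/`eval_ds` stand in for the tokenized datasets, and the output directory follows the checkpoint paths in this repository.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="finetune/models/plant-dnamamba-BPE-NoduleExpDup",  # as in trainer_state.json
    num_train_epochs=5,
    per_device_train_batch_size=8,
    learning_rate=1e-5,
    evaluation_strategy="epoch",  # matches the per-epoch eval entries in the logs
    save_strategy="epoch",        # matches the per-epoch checkpoints (43285, 86570, ...)
    load_best_model_at_end=True,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```
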
## Evaluation

Performance on the evaluation set (n = 43,285 sequences; the metric computation is sketched below):

| Metric | Value |
|--------|-------|
| Accuracy | 0.90 |
| F1 Score | 0.90 |
| Precision | 0.85 |
| Recall | 0.96 |
| Matthews correlation | 0.80 |

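These metrics follow their standard definitions and can be recomputed from predictions with scikit-learn; the snippet below is a generic sketch in which `y_true` and `y_pred` are placeholders for the evaluation labels and predicted classes.

```python
from sklearn.metrics import (accuracy_score, f1_score, matthews_corrcoef,
                             precision_score, recall_score)

def compute_metrics(y_true, y_pred):
    # Binary labels: 1 = nodule-specific, 0 = non-nodule.
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "matthews_correlation": matthews_corrcoef(y_true, y_pred),
    }
```
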
## How to Use

An NVIDIA GPU is required.

### Installation
```bash
conda create -n llms python=3.11
conda activate llms
pip install 'torch<2.4' 'mambapy<=1.2.0' 'transformers<4.46' 'causal-conv1d<=1.3' 'mamba-ssm<2'
```

A fresh install may take ~15 minutes.

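Before running inference, the environment can be sanity-checked with a one-liner of our own suggestion (not part of the original instructions); it should print `True` on a correctly configured machine:

```bash
python -c "import torch, mamba_ssm; print(torch.cuda.is_available())"
```
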
### Basic Inference
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model_name = "lhui2010/nodule-AI"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True)

# Prepare input (3,000 bp promoter sequence)
promoter_sequence = "ATGCGTCTCA" * 300  # your promoter here

# Tokenize and predict
inputs = tokenizer(
    promoter_sequence,
    return_tensors="pt",
    max_length=3000,
    truncation=True,
    padding="max_length"
)

with torch.no_grad():
    outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
    nodule_prob = probs[0][1].item()

print(f"Probability of nodule-specific regulation: {nodule_prob:.4f}")
```

The output should look like:

```
Probability of nodule-specific regulation: 0.0021
```

### Calculate Shapley scores

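Token-level Shapley values can indicate which promoter regions drive a prediction. Below is a minimal sketch using the third-party `shap` package with a `transformers` text-classification pipeline; the `shap` dependency and this wrapper are our illustration, not part of this repository, and they assume the `model`/`tokenizer` objects from the inference example above.

```python
import shap
from transformers import pipeline

# Wrap the fine-tuned model so shap can call it on raw sequences.
clf = pipeline(
    "text-classification",
    model=model,
    tokenizer=tokenizer,
    top_k=None,  # return scores for both classes
    device=0,    # NVIDIA GPU
)

# shap understands transformers pipelines and masks BPE tokens directly.
explainer = shap.Explainer(clf)
shap_values = explainer([promoter_sequence])

# Per-token contributions toward each class for the first sequence.
print(shap_values[0])
```
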
## Citation

---

*Model card last updated: July 12, 2025*
checkpoint-129855/config.json ADDED
@@ -0,0 +1,48 @@
{
  "_name_or_path": "models/plant-dnamamba-BPE",
  "architectures": [
    "MambaForSequenceClassification"
  ],
  "auto_map": {
    "AutoConfig": "configuration_mamba.MambaConfig",
    "AutoModel": "modeling_mamba.MambaModel",
    "AutoModelForCausalLM": "modeling_mamba.MambaForCausalLM",
    "AutoModelForSequenceClassification": "modeling_mamba.MambaForSequenceClassification"
  },
  "bos_token_id": 0,
  "conv_kernel": 4,
  "d_inner": 1536,
  "d_model": 768,
  "eos_token_id": 0,
  "expand": 2,
  "fused_add_norm": true,
  "hidden_act": "silu",
  "hidden_size": 768,
  "initializer_range": 0.1,
  "intermediate_size": 1536,
  "layer_norm_epsilon": 1e-05,
  "model_type": "mamba",
  "n_layer": 24,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "pad_vocab_size_multiple": 8,
  "problem_type": "single_label_classification",
  "rescale_prenorm_residual": false,
  "residual_in_fp32": true,
  "rms_norm": true,
  "ssm_cfg": {},
  "state_size": 16,
  "time_step_floor": 0.0001,
  "time_step_init_scheme": "random",
  "time_step_max": 0.1,
  "time_step_min": 0.001,
  "time_step_rank": 48,
  "time_step_scale": 1.0,
  "torch_dtype": "float32",
  "transformers_version": "4.45.2",
  "use_bias": false,
  "use_cache": false,
  "use_conv_bias": true,
  "use_mambapy": true,
  "vocab_size": 8000
}
checkpoint-129855/configuration_mamba.py ADDED
@@ -0,0 +1,157 @@
# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""MAMBA configuration"""

import math

from transformers.configuration_utils import PretrainedConfig
from transformers.utils import logging


logger = logging.get_logger(__name__)


class MambaConfig(PretrainedConfig):
    """
    This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
    model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
    defaults will yield a similar configuration to that of the MAMBA
    [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.

    Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
    documentation from [`PretrainedConfig`] for more information.


    Args:
        vocab_size (`int`, *optional*, defaults to 50280):
            Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the
            `inputs_ids` passed when calling [`MambaModel`].
        hidden_size (`int`, *optional*, defaults to 768):
            Dimensionality of the embeddings and hidden states.
        state_size (`int`, *optional*, defaults to 16): shape of the state space latents.
        num_hidden_layers (`int`, *optional*, defaults to 32):
            Number of hidden layers in the model.
        layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
            The epsilon to use in the layer normalization layers.
        pad_token_id (`int`, *optional*, defaults to 0):
            Padding token id.
        bos_token_id (`int`, *optional*, defaults to 0):
            The id of the beginning of sentence token in the vocabulary.
        eos_token_id (`int`, *optional*, defaults to 0):
            The id of the end of sentence token in the vocabulary.
        expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
        conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
        use_bias (`bool`, *optional*, defaults to `False`):
            Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
        use_conv_bias (`bool`, *optional*, defaults to `True`):
            Whether or not to use bias in the convolution layer of the mixer block.
        hidden_act (`str`, *optional*, defaults to `"silu"`):
            The non-linear activation function (function or string) in the decoder.
        initializer_range (`float`, *optional*, defaults to 0.1):
            The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
        residual_in_fp32 (`bool`, *optional*, defaults to `True`):
            Whether or not residuals should be in `float32`. If set to `False`, residuals will keep the same `dtype` as the rest of the model.
        time_step_rank (`Union[int, str]`, *optional*, defaults to `"auto"`):
            Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
        time_step_scale (`float`, *optional*, defaults to 1.0):
            Scale used to scale `dt_proj.bias`.
        time_step_min (`float`, *optional*, defaults to 0.001):
            Minimum `time_step` used to bound `dt_proj.bias`.
        time_step_max (`float`, *optional*, defaults to 0.1):
            Maximum `time_step` used to bound `dt_proj.bias`.
        time_step_init_scheme (`str`, *optional*, defaults to `"random"`):
            Init scheme used for `dt_proj.weight`. Should be one of `["random", "uniform"]`.
        time_step_floor (`float`, *optional*, defaults to 0.0001):
            Minimum clamping value of the `dt_proj.bias` layer initialization.
        rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
            Whether or not to rescale `out_proj` weights when initializing.
        use_cache (`bool`, *optional*, defaults to `True`):
            Whether or not the cache should be used.
        use_mambapy (`bool`, *optional*, defaults to `False`):
            Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.


    Example:

    ```python
    >>> from transformers import MambaConfig, MambaModel

    >>> # Initializing a Mamba configuration
    >>> configuration = MambaConfig()

    >>> # Initializing a model (with random weights) from the configuration
    >>> model = MambaModel(configuration)

    >>> # Accessing the model configuration
    >>> configuration = model.config
    ```"""

    model_type = "mamba"

    def __init__(
        self,
        vocab_size=50280,
        hidden_size=768,
        state_size=16,
        num_hidden_layers=32,
        layer_norm_epsilon=1e-5,
        pad_token_id=0,
        bos_token_id=0,
        eos_token_id=0,
        expand=2,
        conv_kernel=4,
        use_bias=False,
        use_conv_bias=True,
        hidden_act="silu",
        initializer_range=0.1,
        residual_in_fp32=True,
        time_step_rank="auto",
        time_step_scale=1.0,
        time_step_min=0.001,
        time_step_max=0.1,
        time_step_init_scheme="random",
        time_step_floor=1e-4,
        rescale_prenorm_residual=False,
        use_cache=True,
        use_mambapy=False,
        **kwargs,
    ):
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size
        self.state_size = state_size
        self.num_hidden_layers = num_hidden_layers
        self.layer_norm_epsilon = layer_norm_epsilon
        self.conv_kernel = conv_kernel
        self.expand = expand
        self.intermediate_size = int(expand * self.hidden_size)
        self.bos_token_id = bos_token_id
        self.eos_token_id = eos_token_id
        self.pad_token_id = pad_token_id
        self.use_bias = use_bias
        self.use_conv_bias = use_conv_bias
        self.hidden_act = hidden_act
        self.initializer_range = initializer_range
        self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
        self.time_step_scale = time_step_scale
        self.time_step_min = time_step_min
        self.time_step_max = time_step_max
        self.time_step_init_scheme = time_step_init_scheme
        self.time_step_floor = time_step_floor
        self.rescale_prenorm_residual = rescale_prenorm_residual
        self.residual_in_fp32 = residual_in_fp32
        self.use_cache = use_cache
        self.use_mambapy = use_mambapy

        super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, **kwargs)
checkpoint-129855/model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6e184db83fe4f2778c9125a7d4c2560bd168be173e09abe39c629f8f05226c9d
size 386689624
checkpoint-129855/optimizer.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1336d954b37d0ba27fc0ca9a56f166c3d1ce88a5f82258105e4a606a9af16037
size 773531386
checkpoint-129855/rng_state.pth ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d91aa3803f3b4077eaff73cee8db42f178b8f0cb5ce72bc7c8dd383d1f14c91e
size 14244
checkpoint-129855/scheduler.pt ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:393bfa5a51c4bba0c4e8aa37f685a416e645dab596c13ce9b6abe300ba29338c
size 1064
checkpoint-129855/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
checkpoint-129855/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
checkpoint-129855/tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|padding|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "max_length": 1024,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "<|endoftext|>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "stride": 0,
  "tokenizer_class": "GPTNeoXTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "<|endoftext|>"
}
checkpoint-129855/trainer_state.json ADDED
@@ -0,0 +1,1885 @@
{
  "best_metric": 0.8962368802798873,
  "best_model_checkpoint": "finetune/models/plant-dnamamba-BPE-NoduleExpDup/checkpoint-129855",
  "epoch": 3.0,
  "eval_steps": 500,
  "global_step": 129855,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {"epoch": 0.011551345731777752, "grad_norm": 29.45172119140625, "learning_rate": 4.620218074293107e-07, "loss": 1.0687, "step": 500},
    {"epoch": 0.023102691463555505, "grad_norm": 31.642974853515625, "learning_rate": 9.240436148586214e-07, "loss": 0.9807, "step": 1000},
    {"epoch": 0.034654037195333255, "grad_norm": 12.195789337158203, "learning_rate": 1.3860654222879322e-06, "loss": 0.8373, "step": 1500},
    {"epoch": 0.04620538292711101, "grad_norm": 13.536879539489746, "learning_rate": 1.8480872297172428e-06, "loss": 0.7818, "step": 2000},
    {"epoch": 0.05775672865888876, "grad_norm": 13.85306453704834, "learning_rate": 2.3101090371465534e-06, "loss": 0.7234, "step": 2500},
    {"epoch": 0.06930807439066651, "grad_norm": 11.797886848449707, "learning_rate": 2.7721308445758644e-06, "loss": 0.6925, "step": 3000},
    {"epoch": 0.08085942012244426, "grad_norm": 6.770650386810303, "learning_rate": 3.2341526520051746e-06, "loss": 0.6751, "step": 3500},
    {"epoch": 0.09241076585422202, "grad_norm": 10.289820671081543, "learning_rate": 3.6961744594344856e-06, "loss": 0.6636, "step": 4000},
    {"epoch": 0.10396211158599977, "grad_norm": 15.198243141174316, "learning_rate": 4.158196266863797e-06, "loss": 0.6563, "step": 4500},
    {"epoch": 0.11551345731777753, "grad_norm": 13.971009254455566, "learning_rate": 4.620218074293107e-06, "loss": 0.6481, "step": 5000},
    {"epoch": 0.12706480304955528, "grad_norm": 8.441276550292969, "learning_rate": 5.082239881722417e-06, "loss": 0.6549, "step": 5500},
    {"epoch": 0.13861614878133302, "grad_norm": 14.8948335647583, "learning_rate": 5.544261689151729e-06, "loss": 0.6507, "step": 6000},
    {"epoch": 0.1501674945131108, "grad_norm": 7.725317478179932, "learning_rate": 6.006283496581039e-06, "loss": 0.6544, "step": 6500},
    {"epoch": 0.16171884024488853, "grad_norm": 17.12333106994629, "learning_rate": 6.468305304010349e-06, "loss": 0.6476, "step": 7000},
    {"epoch": 0.17327018597666627, "grad_norm": 7.033656120300293, "learning_rate": 6.930327111439661e-06, "loss": 0.6496, "step": 7500},
    {"epoch": 0.18482153170844404, "grad_norm": 7.053399085998535, "learning_rate": 7.392348918868971e-06, "loss": 0.6523, "step": 8000},
    {"epoch": 0.19637287744022178, "grad_norm": 6.655728340148926, "learning_rate": 7.854370726298282e-06, "loss": 0.6455, "step": 8500},
    {"epoch": 0.20792422317199954, "grad_norm": 8.08369255065918, "learning_rate": 8.316392533727593e-06, "loss": 0.6415, "step": 9000},
    {"epoch": 0.21947556890377729, "grad_norm": 3.947148084640503, "learning_rate": 8.778414341156903e-06, "loss": 0.6514, "step": 9500},
    {"epoch": 0.23102691463555505, "grad_norm": 8.264023780822754, "learning_rate": 9.240436148586214e-06, "loss": 0.6431, "step": 10000},
    {"epoch": 0.2425782603673328, "grad_norm": 4.74052095413208, "learning_rate": 9.702457956015525e-06, "loss": 0.6331, "step": 10500},
    {"epoch": 0.25412960609911056, "grad_norm": 12.625064849853516, "learning_rate": 9.991342538776186e-06, "loss": 0.6449, "step": 11000},
    {"epoch": 0.2656809518308883, "grad_norm": 17.814157485961914, "learning_rate": 9.967023827473336e-06, "loss": 0.6414, "step": 11500},
    {"epoch": 0.27723229756266604, "grad_norm": 15.34868049621582, "learning_rate": 9.942705116170485e-06, "loss": 0.6287, "step": 12000},
    {"epoch": 0.2887836432944438, "grad_norm": 5.785658836364746, "learning_rate": 9.918386404867634e-06, "loss": 0.6417, "step": 12500},
    {"epoch": 0.3003349890262216, "grad_norm": 3.7156598567962646, "learning_rate": 9.894067693564784e-06, "loss": 0.6436, "step": 13000},
    {"epoch": 0.3118863347579993, "grad_norm": 12.272802352905273, "learning_rate": 9.869748982261933e-06, "loss": 0.6505, "step": 13500},
    {"epoch": 0.32343768048977706, "grad_norm": 15.592353820800781, "learning_rate": 9.845430270959082e-06, "loss": 0.6313, "step": 14000},
    {"epoch": 0.3349890262215548, "grad_norm": 4.684946537017822, "learning_rate": 9.821111559656232e-06, "loss": 0.6457, "step": 14500},
    {"epoch": 0.34654037195333254, "grad_norm": 6.0165300369262695, "learning_rate": 9.79679284835338e-06, "loss": 0.641, "step": 15000},
    {"epoch": 0.35809171768511033, "grad_norm": 10.924590110778809, "learning_rate": 9.772474137050531e-06, "loss": 0.6293, "step": 15500},
    {"epoch": 0.3696430634168881, "grad_norm": 7.296951770782471, "learning_rate": 9.74815542574768e-06, "loss": 0.627, "step": 16000},
    {"epoch": 0.3811944091486658, "grad_norm": 14.750964164733887, "learning_rate": 9.723836714444829e-06, "loss": 0.6254, "step": 16500},
    {"epoch": 0.39274575488044355, "grad_norm": 7.377384662628174, "learning_rate": 9.699518003141977e-06, "loss": 0.6287, "step": 17000},
    {"epoch": 0.40429710061222135, "grad_norm": 6.540673732757568, "learning_rate": 9.675199291839128e-06, "loss": 0.6309, "step": 17500},
    {"epoch": 0.4158484463439991, "grad_norm": 6.4445295333862305, "learning_rate": 9.650880580536277e-06, "loss": 0.6302, "step": 18000},
    {"epoch": 0.42739979207577683, "grad_norm": 7.85426664352417, "learning_rate": 9.626561869233427e-06, "loss": 0.6201, "step": 18500},
    {"epoch": 0.43895113780755457, "grad_norm": 5.960766792297363, "learning_rate": 9.602243157930576e-06, "loss": 0.626, "step": 19000},
    {"epoch": 0.4505024835393323, "grad_norm": 9.376029968261719, "learning_rate": 9.577924446627726e-06, "loss": 0.6164, "step": 19500},
    {"epoch": 0.4620538292711101, "grad_norm": 17.05916976928711, "learning_rate": 9.553605735324875e-06, "loss": 0.6232, "step": 20000},
    {"epoch": 0.47360517500288785, "grad_norm": 7.523278713226318, "learning_rate": 9.529287024022024e-06, "loss": 0.601, "step": 20500},
    {"epoch": 0.4851565207346656, "grad_norm": 6.75208044052124, "learning_rate": 9.504968312719172e-06, "loss": 0.6157, "step": 21000},
    {"epoch": 0.4967078664664433, "grad_norm": 12.919069290161133, "learning_rate": 9.480649601416323e-06, "loss": 0.6098, "step": 21500},
    {"epoch": 0.5082592121982211, "grad_norm": 19.89571189880371, "learning_rate": 9.456330890113472e-06, "loss": 0.6131, "step": 22000},
    {"epoch": 0.5198105579299989, "grad_norm": 7.436350345611572, "learning_rate": 9.432012178810622e-06, "loss": 0.6121, "step": 22500},
    {"epoch": 0.5313619036617766, "grad_norm": 10.576221466064453, "learning_rate": 9.40769346750777e-06, "loss": 0.6005, "step": 23000},
    {"epoch": 0.5429132493935543, "grad_norm": 12.731452941894531, "learning_rate": 9.383374756204921e-06, "loss": 0.6217, "step": 23500},
    {"epoch": 0.5544645951253321, "grad_norm": 9.219985961914062, "learning_rate": 9.35905604490207e-06, "loss": 0.5985, "step": 24000},
    {"epoch": 0.5660159408571098, "grad_norm": 12.659988403320312, "learning_rate": 9.334737333599219e-06, "loss": 0.5951, "step": 24500},
    {"epoch": 0.5775672865888876, "grad_norm": 32.879234313964844, "learning_rate": 9.310418622296367e-06, "loss": 0.5961, "step": 25000},
    {"epoch": 0.5891186323206653, "grad_norm": 9.763121604919434, "learning_rate": 9.286099910993518e-06, "loss": 0.6081, "step": 25500},
    {"epoch": 0.6006699780524432, "grad_norm": 9.057079315185547, "learning_rate": 9.261781199690667e-06, "loss": 0.5863, "step": 26000},
    {"epoch": 0.6122213237842209, "grad_norm": 17.381582260131836, "learning_rate": 9.237462488387817e-06, "loss": 0.5857, "step": 26500},
    {"epoch": 0.6237726695159986, "grad_norm": 18.3979549407959, "learning_rate": 9.213143777084966e-06, "loss": 0.5853, "step": 27000},
    {"epoch": 0.6353240152477764, "grad_norm": 11.13846206665039, "learning_rate": 9.188825065782114e-06, "loss": 0.582, "step": 27500},
    {"epoch": 0.6468753609795541, "grad_norm": 8.506060600280762, "learning_rate": 9.164506354479265e-06, "loss": 0.5773, "step": 28000},
    {"epoch": 0.6584267067113319, "grad_norm": 9.814123153686523, "learning_rate": 9.140187643176414e-06, "loss": 0.5797, "step": 28500},
    {"epoch": 0.6699780524431096, "grad_norm": 11.70206069946289, "learning_rate": 9.115868931873562e-06, "loss": 0.5587, "step": 29000},
    {"epoch": 0.6815293981748873, "grad_norm": 11.949383735656738, "learning_rate": 9.091550220570711e-06, "loss": 0.5769, "step": 29500},
    {"epoch": 0.6930807439066651, "grad_norm": 11.51278018951416, "learning_rate": 9.067231509267862e-06, "loss": 0.5798, "step": 30000},
    {"epoch": 0.7046320896384429, "grad_norm": 13.705997467041016, "learning_rate": 9.042912797965012e-06, "loss": 0.5585, "step": 30500},
    {"epoch": 0.7161834353702207, "grad_norm": 18.50054168701172, "learning_rate": 9.01859408666216e-06, "loss": 0.5656, "step": 31000},
    {"epoch": 0.7277347811019984, "grad_norm": 23.920692443847656, "learning_rate": 8.99427537535931e-06, "loss": 0.5493, "step": 31500},
    {"epoch": 0.7392861268337761, "grad_norm": 15.425186157226562, "learning_rate": 8.96995666405646e-06, "loss": 0.5538, "step": 32000},
    {"epoch": 0.7508374725655539, "grad_norm": 11.077018737792969, "learning_rate": 8.945637952753609e-06, "loss": 0.5553, "step": 32500},
    {"epoch": 0.7623888182973316, "grad_norm": 17.128128051757812, "learning_rate": 8.921319241450757e-06, "loss": 0.5657, "step": 33000},
    {"epoch": 0.7739401640291094, "grad_norm": 37.22261428833008, "learning_rate": 8.897000530147906e-06, "loss": 0.5573, "step": 33500},
    {"epoch": 0.7854915097608871, "grad_norm": 18.649255752563477, "learning_rate": 8.872681818845056e-06, "loss": 0.5337, "step": 34000},
    {"epoch": 0.7970428554926648, "grad_norm": 11.49511432647705, "learning_rate": 8.848363107542205e-06, "loss": 0.5442, "step": 34500},
    {"epoch": 0.8085942012244427, "grad_norm": 20.64067840576172, "learning_rate": 8.824044396239356e-06, "loss": 0.5632, "step": 35000},
    {"epoch": 0.8201455469562204, "grad_norm": 18.51673126220703, "learning_rate": 8.799725684936504e-06, "loss": 0.5454, "step": 35500},
    {"epoch": 0.8316968926879982, "grad_norm": 18.139997482299805, "learning_rate": 8.775406973633655e-06, "loss": 0.5371, "step": 36000},
    {"epoch": 0.8432482384197759, "grad_norm": 21.852693557739258, "learning_rate": 8.751088262330804e-06, "loss": 0.539, "step": 36500},
    {"epoch": 0.8547995841515537, "grad_norm": 31.914161682128906, "learning_rate": 8.726769551027952e-06, "loss": 0.5383, "step": 37000},
    {"epoch": 0.8663509298833314, "grad_norm": 9.24451732635498, "learning_rate": 8.702450839725101e-06, "loss": 0.548, "step": 37500},
    {"epoch": 0.8779022756151091, "grad_norm": 27.3262882232666, "learning_rate": 8.678132128422251e-06, "loss": 0.524, "step": 38000},
    {"epoch": 0.8894536213468869, "grad_norm": 23.231952667236328, "learning_rate": 8.6538134171194e-06, "loss": 0.5121, "step": 38500},
    {"epoch": 0.9010049670786646, "grad_norm": 28.35572052001953, "learning_rate": 8.62949470581655e-06, "loss": 0.5333, "step": 39000},
    {"epoch": 0.9125563128104425, "grad_norm": 24.61405372619629, "learning_rate": 8.6051759945137e-06, "loss": 0.5015, "step": 39500},
    {"epoch": 0.9241076585422202, "grad_norm": 30.62819480895996, "learning_rate": 8.580857283210848e-06, "loss": 0.5092, "step": 40000},
    {"epoch": 0.935659004273998, "grad_norm": 19.295331954956055, "learning_rate": 8.556538571907999e-06, "loss": 0.5125, "step": 40500},
    {"epoch": 0.9472103500057757, "grad_norm": 10.785584449768066, "learning_rate": 8.532219860605147e-06, "loss": 0.5161, "step": 41000},
    {"epoch": 0.9587616957375534, "grad_norm": 24.81365394592285, "learning_rate": 8.507901149302296e-06, "loss": 0.498, "step": 41500},
    {"epoch": 0.9703130414693312, "grad_norm": 48.026973724365234, "learning_rate": 8.483582437999446e-06, "loss": 0.4916, "step": 42000},
    {"epoch": 0.9818643872011089, "grad_norm": 6.653892517089844, "learning_rate": 8.459263726696595e-06, "loss": 0.4944, "step": 42500},
    {"epoch": 0.9934157329328867, "grad_norm": 39.814945220947266, "learning_rate": 8.434945015393746e-06, "loss": 0.4954, "step": 43000},
    {
      "epoch": 1.0,
      "eval_accuracy": 0.7796003234376805,
      "eval_f1": 0.7890500619140279,
      "eval_loss": 0.4737609326839447,
      "eval_matthews_correlation": 0.5599099791389662,
      "eval_precision": 0.7656524910955671,
      "eval_recall": 0.8139227225035354,
      "eval_runtime": 550.4755,
      "eval_samples_per_second": 78.632,
      "eval_steps_per_second": 4.916,
      "step": 43285
    },
    {"epoch": 1.0049670786646645, "grad_norm": 22.5259952545166, "learning_rate": 8.410626304090894e-06, "loss": 0.4339, "step": 43500},
    {"epoch": 1.0165184243964422, "grad_norm": 25.5583438873291, "learning_rate": 8.386307592788043e-06, "loss": 0.3274, "step": 44000},
    {"epoch": 1.02806977012822, "grad_norm": 9.524903297424316, "learning_rate": 8.361988881485194e-06, "loss": 0.3372, "step": 44500},
    {"epoch": 1.0396211158599977, "grad_norm": 37.77333068847656, "learning_rate": 8.337670170182342e-06, "loss": 0.3471, "step": 45000},
    {"epoch": 1.0511724615917755, "grad_norm": 47.2765007019043, "learning_rate": 8.313351458879491e-06, "loss": 0.3436, "step": 45500},
    {"epoch": 1.0627238073235532, "grad_norm": 58.79914474487305, "learning_rate": 8.289032747576641e-06, "loss": 0.3524, "step": 46000},
    {"epoch": 1.074275153055331, "grad_norm": 23.910385131835938, "learning_rate": 8.26471403627379e-06, "loss": 0.3283, "step": 46500},
    {"epoch": 1.0858264987871087, "grad_norm": 65.18335723876953, "learning_rate": 8.24039532497094e-06, "loss": 0.3529, "step": 47000},
    {"epoch": 1.0973778445188864, "grad_norm": 29.333236694335938, "learning_rate": 8.21607661366809e-06, "loss": 0.3471, "step": 47500},
    {"epoch": 1.1089291902506642, "grad_norm": 41.33262252807617, "learning_rate": 8.191757902365238e-06, "loss": 0.3507, "step": 48000},
    {"epoch": 1.120480535982442, "grad_norm": 91.7591781616211, "learning_rate": 8.167439191062387e-06, "loss": 0.3506, "step": 48500},
    {"epoch": 1.1320318817142196, "grad_norm": 7.366957187652588, "learning_rate": 8.143120479759537e-06, "loss": 0.3526, "step": 49000},
    {"epoch": 1.1435832274459974, "grad_norm": 31.159626007080078, "learning_rate": 8.118801768456686e-06, "loss": 0.3423, "step": 49500},
    {"epoch": 1.1551345731777751, "grad_norm": 15.73024845123291, "learning_rate": 8.094483057153836e-06, "loss": 0.3516, "step": 50000},
    {"epoch": 1.1666859189095529, "grad_norm": 66.58766174316406, "learning_rate": 8.070164345850985e-06, "loss": 0.3435, "step": 50500},
    {"epoch": 1.1782372646413308, "grad_norm": 48.182308197021484, "learning_rate": 8.045845634548136e-06, "loss": 0.344, "step": 51000},
    {"epoch": 1.1897886103731086, "grad_norm": 42.61834716796875, "learning_rate": 8.021526923245284e-06, "loss": 0.3739, "step": 51500},
    {"epoch": 1.2013399561048863, "grad_norm": 59.38914489746094, "learning_rate": 7.997208211942433e-06, "loss": 0.3399, "step": 52000},
    {"epoch": 1.212891301836664, "grad_norm": 5.7647857666015625, "learning_rate": 7.972889500639582e-06, "loss": 0.3518, "step": 52500},
    {"epoch": 1.2244426475684418, "grad_norm": 73.71373748779297, "learning_rate": 7.948570789336732e-06, "loss": 0.3523, "step": 53000},
    {"epoch": 1.2359939933002195, "grad_norm": 14.03942584991455, "learning_rate": 7.924252078033881e-06, "loss": 0.3512, "step": 53500},
    {"epoch": 1.2475453390319973, "grad_norm": 53.254791259765625, "learning_rate": 7.899933366731031e-06, "loss": 0.3454, "step": 54000},
    {"epoch": 1.259096684763775, "grad_norm": 4.009112358093262, "learning_rate": 7.87561465542818e-06, "loss": 0.3406, "step": 54500},
    {"epoch": 1.2706480304955527, "grad_norm": 37.73891067504883, "learning_rate": 7.85129594412533e-06, "loss": 0.3474, "step": 55000},
    {"epoch": 1.2821993762273305, "grad_norm": 84.36405181884766, "learning_rate": 7.82697723282248e-06, "loss": 0.3551, "step": 55500},
    {"epoch": 1.2937507219591082, "grad_norm": 43.2830810546875, "learning_rate": 7.802658521519628e-06, "loss": 0.3521, "step": 56000},
    {"epoch": 1.305302067690886, "grad_norm": 143.8572998046875, "learning_rate": 7.778339810216777e-06, "loss": 0.3544, "step": 56500},
    {"epoch": 1.3168534134226637, "grad_norm": 19.368745803833008, "learning_rate": 7.754021098913927e-06, "loss": 0.332, "step": 57000},
    {"epoch": 1.3284047591544415, "grad_norm": 37.57652282714844, "learning_rate": 7.729702387611076e-06, "loss": 0.3614, "step": 57500},
    {"epoch": 1.3399561048862192, "grad_norm": 12.710687637329102, "learning_rate": 7.705383676308226e-06, "loss": 0.3397, "step": 58000},
    {"epoch": 1.351507450617997, "grad_norm": 9.734477043151855, "learning_rate": 7.681064965005375e-06, "loss": 0.3337, "step": 58500},
    {"epoch": 1.3630587963497747, "grad_norm": 50.98896408081055, "learning_rate": 7.656746253702524e-06, "loss": 0.3466, "step": 59000},
    {"epoch": 1.3746101420815524, "grad_norm": 36.90542221069336, "learning_rate": 7.632427542399674e-06, "loss": 0.3345, "step": 59500},
    {"epoch": 1.3861614878133302, "grad_norm": 87.90083312988281, "learning_rate": 7.608108831096823e-06, "loss": 0.3268, "step": 60000},
    {"epoch": 1.397712833545108, "grad_norm": 33.43013000488281, "learning_rate": 7.583790119793973e-06, "loss": 0.3572, "step": 60500},
    {"epoch": 1.4092641792768856, "grad_norm": 10.33069896697998, "learning_rate": 7.559471408491121e-06, "loss": 0.3565, "step": 61000},
    {"epoch": 1.4208155250086634, "grad_norm": 1.686011791229248, "learning_rate": 7.535152697188272e-06, "loss": 0.3647, "step": 61500},
    {"epoch": 1.4323668707404413, "grad_norm": 20.76148223876953, "learning_rate": 7.5108339858854206e-06, "loss": 0.3284, "step": 62000},
    {"epoch": 1.443918216472219, "grad_norm": 9.261433601379395, "learning_rate": 7.48651527458257e-06, "loss": 0.3474, "step": 62500},
    {"epoch": 1.4554695622039968, "grad_norm": 38.97019577026367, "learning_rate": 7.462196563279719e-06, "loss": 0.3433, "step": 63000},
    {"epoch": 1.4670209079357746, "grad_norm": 43.138450622558594, "learning_rate": 7.437877851976869e-06, "loss": 0.3406, "step": 63500},
    {"epoch": 1.4785722536675523, "grad_norm": 10.66066837310791, "learning_rate": 7.413559140674018e-06, "loss": 0.3533, "step": 64000},
    {"epoch": 1.49012359939933, "grad_norm": 32.289588928222656, "learning_rate": 7.389240429371168e-06, "loss": 0.3377, "step": 64500},
    {"epoch": 1.5016749451311078, "grad_norm": 85.28985595703125, "learning_rate": 7.364921718068316e-06, "loss": 0.3598, "step": 65000},
    {"epoch": 1.5132262908628855, "grad_norm": 56.37942123413086, "learning_rate": 7.340603006765467e-06, "loss": 0.3354, "step": 65500},
    {"epoch": 1.5247776365946633, "grad_norm": 91.6646499633789, "learning_rate": 7.3162842954626156e-06, "loss": 0.3557, "step": 66000},
    {"epoch": 1.536328982326441, "grad_norm": 22.282794952392578, "learning_rate": 7.291965584159765e-06, "loss": 0.3375, "step": 66500},
    {"epoch": 1.5478803280582187, "grad_norm": 23.59638214111328, "learning_rate": 7.267646872856914e-06, "loss": 0.3613, "step": 67000},
    {"epoch": 1.5594316737899967, "grad_norm": 101.47071838378906, "learning_rate": 7.243328161554064e-06, "loss": 0.3625, "step": 67500},
    {"epoch": 1.5709830195217744, "grad_norm": 10.680140495300293, "learning_rate": 7.219009450251213e-06, "loss": 0.3447, "step": 68000},
    {"epoch": 1.5825343652535522, "grad_norm": 3.526810646057129, "learning_rate": 7.194690738948363e-06, "loss": 0.3752, "step": 68500},
    {"epoch": 1.59408571098533, "grad_norm": 63.79883575439453, "learning_rate": 7.170372027645511e-06, "loss": 0.3556, "step": 69000},
    {"epoch": 1.6056370567171077, "grad_norm": 57.77899169921875, "learning_rate": 7.14605331634266e-06, "loss": 0.3734, "step": 69500},
    {"epoch": 1.6171884024488854, "grad_norm": 12.811071395874023, "learning_rate": 7.1217346050398105e-06, "loss": 0.3314, "step": 70000},
    {"epoch": 1.6287397481806631, "grad_norm": 45.64273452758789, "learning_rate": 7.09741589373696e-06, "loss": 0.3603, "step": 70500},
    {"epoch": 1.6402910939124409, "grad_norm": 0.7724096775054932, "learning_rate": 7.073097182434109e-06, "loss": 0.3531, "step": 71000},
    {"epoch": 1.6518424396442186, "grad_norm": 111.21202850341797, "learning_rate": 7.048778471131258e-06, "loss": 0.3687, "step": 71500},
    {"epoch": 1.6633937853759964, "grad_norm": 2.9768288135528564, "learning_rate": 7.024459759828408e-06, "loss": 0.362, "step": 72000},
    {"epoch": 1.674945131107774, "grad_norm": 27.48809242248535, "learning_rate": 7.000141048525558e-06, "loss": 0.3616, "step": 72500},
    {"epoch": 1.6864964768395518, "grad_norm": 48.34687805175781, "learning_rate": 6.975822337222706e-06, "loss": 0.3367, "step": 73000},
    {"epoch": 1.6980478225713296, "grad_norm": 39.69007110595703, "learning_rate": 6.951503625919855e-06, "loss": 0.3621, "step": 73500},
    {"epoch": 1.7095991683031073, "grad_norm": 110.33565521240234, "learning_rate": 6.9271849146170055e-06, "loss": 0.3539, "step": 74000},
    {"epoch": 1.721150514034885, "grad_norm": 41.324222564697266, "learning_rate": 6.902866203314154e-06, "loss": 0.3433, "step": 74500},
    {"epoch": 1.7327018597666628, "grad_norm": 73.07483673095703, "learning_rate": 6.878547492011304e-06, "loss": 0.3581, "step": 75000},
    {"epoch": 1.7442532054984405, "grad_norm": 2.784154176712036, "learning_rate": 6.854228780708453e-06, "loss": 0.3506, "step": 75500},
    {"epoch": 1.7558045512302183, "grad_norm": 46.016876220703125, "learning_rate": 6.829910069405603e-06, "loss": 0.3687, "step": 76000},
    {"epoch": 1.767355896961996, "grad_norm": 10.431674003601074, "learning_rate": 6.805591358102752e-06, "loss": 0.3448, "step": 76500},
    {"epoch": 1.7789072426937738, "grad_norm": 77.33363342285156, "learning_rate": 6.781272646799901e-06, "loss": 0.3688, "step": 77000},
    {"epoch": 1.7904585884255515, "grad_norm": 36.162261962890625, "learning_rate": 6.75695393549705e-06, "loss": 0.3481, "step": 77500},
    {"epoch": 1.8020099341573292, "grad_norm": 34.184844970703125, "learning_rate": 6.7326352241942005e-06, "loss": 0.3538, "step": 78000},
    {"epoch": 1.813561279889107, "grad_norm": 89.07086944580078, "learning_rate": 6.708316512891349e-06, "loss": 0.3735, "step": 78500},
    {"epoch": 1.8251126256208847, "grad_norm": 3.9982004165649414, "learning_rate": 6.683997801588499e-06, "loss": 0.3394, "step": 79000},
    {"epoch": 1.8366639713526625, "grad_norm": 81.374267578125, "learning_rate": 6.659679090285648e-06, "loss": 0.3174, "step": 79500},
    {"epoch": 1.8482153170844402, "grad_norm": 0.20297826826572418, "learning_rate": 6.635360378982797e-06, "loss": 0.3153, "step": 80000},
    {"epoch": 1.859766662816218, "grad_norm": 59.6692008972168, "learning_rate": 6.611041667679947e-06, "loss": 0.3545, "step": 80500},
    {"epoch": 1.8713180085479957, "grad_norm": 1.2928632497787476, "learning_rate": 6.586722956377096e-06, "loss": 0.3373, "step": 81000},
    {"epoch": 1.8828693542797736, "grad_norm": 94.39313507080078, "learning_rate": 6.562404245074245e-06, "loss": 0.3222, "step": 81500},
    {"epoch": 1.8944207000115514, "grad_norm": 29.352657318115234, "learning_rate": 6.538085533771395e-06, "loss": 0.3418, "step": 82000},
    {"epoch": 1.9059720457433291, "grad_norm": 11.170536041259766, "learning_rate": 6.513766822468544e-06, "loss": 0.3423, "step": 82500},
    {"epoch": 1.9175233914751069, "grad_norm": 90.75904846191406, "learning_rate": 6.489448111165694e-06, "loss": 0.3253, "step": 83000},
    {"epoch": 1.9290747372068846, "grad_norm": 79.5523681640625, "learning_rate": 6.4651293998628426e-06, "loss": 0.3423, "step": 83500},
    {"epoch": 1.9406260829386623, "grad_norm": 2.362600564956665, "learning_rate": 6.440810688559992e-06, "loss": 0.3233, "step": 84000},
    {"epoch": 1.95217742867044, "grad_norm": 106.5050277709961, "learning_rate": 6.416491977257142e-06, "loss": 0.3291, "step": 84500},
    {"epoch": 1.9637287744022178, "grad_norm": 44.49625015258789, "learning_rate": 6.392173265954291e-06, "loss": 0.3462, "step": 85000},
    {"epoch": 1.9752801201339956, "grad_norm": 65.74842071533203, "learning_rate": 6.36785455465144e-06, "loss": 0.3356, "step": 85500},
    {"epoch": 1.9868314658657735, "grad_norm": 25.529869079589844, "learning_rate": 6.34353584334859e-06, "loss": 0.3418, "step": 86000},
    {"epoch": 1.9983828115975513, "grad_norm": 0.7819985151290894, "learning_rate": 6.319217132045739e-06, "loss": 0.3299, "step": 86500},
    {
      "epoch": 2.0,
      "eval_accuracy": 0.8750144391821647,
      "eval_f1": 0.883420247381804,
      "eval_loss": 0.5118392705917358,
      "eval_matthews_correlation": 0.7549403916374646,
      "eval_precision": 0.8371656115989381,
      "eval_recall": 0.9350850782354819,
      "eval_runtime": 543.2127,
      "eval_samples_per_second": 79.683,
      "eval_steps_per_second": 4.981,
      "step": 86570
    },
    {"epoch": 2.009934157329329, "grad_norm": 0.09307216852903366, "learning_rate": 6.294898420742889e-06, "loss": 0.2056, "step": 87000},
    {"epoch": 2.0214855030611067, "grad_norm": 0.1265561431646347, "learning_rate": 6.2705797094400376e-06, "loss": 0.201, "step": 87500},
    {"epoch": 2.0330368487928845, "grad_norm": 0.5366824865341187, "learning_rate": 6.246260998137187e-06, "loss": 0.1897, "step": 88000},
    {"epoch": 2.0445881945246622, "grad_norm": 37.82719421386719, "learning_rate": 6.221942286834337e-06, "loss": 0.2095, "step": 88500},
    {"epoch": 2.05613954025644, "grad_norm": 0.5017096996307373, "learning_rate": 6.197623575531486e-06, "loss": 0.1988, "step": 89000},
    {"epoch": 2.0676908859882177, "grad_norm": 0.10803969949483871, "learning_rate": 6.173304864228635e-06, "loss": 0.1991, "step": 89500},
    {"epoch": 2.0792422317199954, "grad_norm": 0.3513072431087494, "learning_rate": 6.148986152925785e-06, "loss": 0.2018, "step": 90000},
    {"epoch": 2.090793577451773, "grad_norm": 1.3080644607543945, "learning_rate": 6.124667441622933e-06, "loss": 0.2095, "step": 90500},
    {"epoch": 2.102344923183551, "grad_norm": 5.2385053634643555, "learning_rate": 6.100348730320084e-06, "loss": 0.1974, "step": 91000},
    {"epoch": 2.1138962689153287, "grad_norm": 10.532730102539062, "learning_rate": 6.0760300190172326e-06, "loss": 0.2019, "step": 91500},
    {"epoch": 2.1254476146471064, "grad_norm": 0.104009710252285, "learning_rate": 6.051711307714382e-06, "loss": 0.2091, "step": 92000},
    {"epoch": 2.136998960378884, "grad_norm": 0.038262490183115005, "learning_rate": 6.027392596411531e-06, "loss": 0.2422, "step": 92500},
    {"epoch": 2.148550306110662, "grad_norm": 0.30325543880462646, "learning_rate": 6.003073885108681e-06, "loss": 0.1971, "step": 93000},
    {"epoch": 2.1601016518424396, "grad_norm": 0.2501794099807739, "learning_rate": 5.97875517380583e-06, "loss": 0.2123, "step": 93500},
    {"epoch": 2.1716529975742174, "grad_norm": 0.9724676609039307, "learning_rate": 5.95443646250298e-06, "loss": 0.2222, "step": 94000},
    {"epoch": 2.183204343305995, "grad_norm": 23.507680892944336, "learning_rate": 5.930117751200128e-06, "loss": 0.199, "step": 94500},
    {"epoch": 2.194755689037773, "grad_norm": 23.95125389099121, "learning_rate": 5.905799039897279e-06, "loss": 0.2367, "step": 95000},
    {"epoch": 2.2063070347695506, "grad_norm": 0.026822537183761597, "learning_rate": 5.8814803285944275e-06, "loss": 0.1918, "step": 95500},
    {"epoch": 2.2178583805013283, "grad_norm": 0.11111404001712799, "learning_rate": 5.857161617291577e-06, "loss": 0.2047, "step": 96000},
    {"epoch": 2.229409726233106, "grad_norm": 0.1919553279876709, "learning_rate": 5.832842905988726e-06, "loss": 0.225, "step": 96500},
    {"epoch": 2.240961071964884, "grad_norm": 0.44445013999938965, "learning_rate": 5.808524194685876e-06, "loss": 0.2373, "step": 97000},
    {"epoch": 2.2525124176966616, "grad_norm": 0.0314050130546093, "learning_rate": 5.784205483383025e-06, "loss": 0.2313, "step": 97500},
    {"epoch": 2.2640637634284393, "grad_norm": 0.3491435945034027, "learning_rate": 5.759886772080175e-06, "loss": 0.2149, "step": 98000},
    {"epoch": 2.275615109160217, "grad_norm": 0.41699114441871643, "learning_rate": 5.735568060777323e-06, "loss": 0.2012, "step": 98500},
    {"epoch": 2.2871664548919948, "grad_norm": 0.05766846239566803, "learning_rate": 5.711249349474474e-06, "loss": 0.2247, "step": 99000},
    {"epoch": 2.2987178006237725, "grad_norm": 10.02811336517334, "learning_rate": 5.6869306381716225e-06, "loss": 0.2081, "step": 99500},
    {"epoch": 2.3102691463555503, "grad_norm": 0.1262376755475998, "learning_rate": 5.662611926868772e-06, "loss": 0.2471, "step": 100000},
    {"epoch": 2.321820492087328, "grad_norm": 7.784575939178467, "learning_rate": 5.638293215565921e-06, "loss": 0.2188, "step": 100500},
    {"epoch": 2.3333718378191057, "grad_norm": 0.4924183189868927, "learning_rate": 5.61397450426307e-06, "loss": 0.182, "step": 101000},
1450
+ },
1451
+ {
1452
+ "epoch": 2.3449231835508835,
1453
+ "grad_norm": 0.8090869188308716,
1454
+ "learning_rate": 5.58965579296022e-06,
1455
+ "loss": 0.2307,
1456
+ "step": 101500
1457
+ },
1458
+ {
1459
+ "epoch": 2.3564745292826617,
1460
+ "grad_norm": 0.5244800448417664,
1461
+ "learning_rate": 5.56533708165737e-06,
1462
+ "loss": 0.2162,
1463
+ "step": 102000
1464
+ },
1465
+ {
1466
+ "epoch": 2.368025875014439,
1467
+ "grad_norm": 0.12117698788642883,
1468
+ "learning_rate": 5.541018370354518e-06,
1469
+ "loss": 0.2339,
1470
+ "step": 102500
1471
+ },
1472
+ {
1473
+ "epoch": 2.379577220746217,
1474
+ "grad_norm": 139.68356323242188,
1475
+ "learning_rate": 5.516699659051667e-06,
1476
+ "loss": 0.2306,
1477
+ "step": 103000
1478
+ },
1479
+ {
1480
+ "epoch": 2.391128566477995,
1481
+ "grad_norm": 0.34436774253845215,
1482
+ "learning_rate": 5.4923809477488175e-06,
1483
+ "loss": 0.2289,
1484
+ "step": 103500
1485
+ },
1486
+ {
1487
+ "epoch": 2.4026799122097726,
1488
+ "grad_norm": 0.06684353947639465,
1489
+ "learning_rate": 5.468062236445967e-06,
1490
+ "loss": 0.2388,
1491
+ "step": 104000
1492
+ },
1493
+ {
1494
+ "epoch": 2.4142312579415504,
1495
+ "grad_norm": 38.31502914428711,
1496
+ "learning_rate": 5.443743525143116e-06,
1497
+ "loss": 0.2185,
1498
+ "step": 104500
1499
+ },
1500
+ {
1501
+ "epoch": 2.425782603673328,
1502
+ "grad_norm": 0.47605860233306885,
1503
+ "learning_rate": 5.419424813840265e-06,
1504
+ "loss": 0.2536,
1505
+ "step": 105000
1506
+ },
1507
+ {
1508
+ "epoch": 2.437333949405106,
1509
+ "grad_norm": 2.486711025238037,
1510
+ "learning_rate": 5.395106102537415e-06,
1511
+ "loss": 0.224,
1512
+ "step": 105500
1513
+ },
1514
+ {
1515
+ "epoch": 2.4488852951368836,
1516
+ "grad_norm": 0.19100871682167053,
1517
+ "learning_rate": 5.370787391234565e-06,
1518
+ "loss": 0.2429,
1519
+ "step": 106000
1520
+ },
1521
+ {
1522
+ "epoch": 2.4604366408686613,
1523
+ "grad_norm": 0.6943984627723694,
1524
+ "learning_rate": 5.346468679931713e-06,
1525
+ "loss": 0.2051,
1526
+ "step": 106500
1527
+ },
1528
+ {
1529
+ "epoch": 2.471987986600439,
1530
+ "grad_norm": 100.54878234863281,
1531
+ "learning_rate": 5.322149968628862e-06,
1532
+ "loss": 0.2177,
1533
+ "step": 107000
1534
+ },
1535
+ {
1536
+ "epoch": 2.483539332332217,
1537
+ "grad_norm": 1.6368072032928467,
1538
+ "learning_rate": 5.2978312573260125e-06,
1539
+ "loss": 0.2182,
1540
+ "step": 107500
1541
+ },
1542
+ {
1543
+ "epoch": 2.4950906780639945,
1544
+ "grad_norm": 0.33522534370422363,
1545
+ "learning_rate": 5.273512546023162e-06,
1546
+ "loss": 0.2011,
1547
+ "step": 108000
1548
+ },
1549
+ {
1550
+ "epoch": 2.5066420237957723,
1551
+ "grad_norm": 1.7450274229049683,
1552
+ "learning_rate": 5.249193834720311e-06,
1553
+ "loss": 0.2316,
1554
+ "step": 108500
1555
+ },
1556
+ {
1557
+ "epoch": 2.51819336952755,
1558
+ "grad_norm": 0.29246997833251953,
1559
+ "learning_rate": 5.2248751234174596e-06,
1560
+ "loss": 0.2311,
1561
+ "step": 109000
1562
+ },
1563
+ {
1564
+ "epoch": 2.5297447152593278,
1565
+ "grad_norm": 0.0980878472328186,
1566
+ "learning_rate": 5.20055641211461e-06,
1567
+ "loss": 0.2224,
1568
+ "step": 109500
1569
+ },
1570
+ {
1571
+ "epoch": 2.5412960609911055,
1572
+ "grad_norm": 19.347719192504883,
1573
+ "learning_rate": 5.17623770081176e-06,
1574
+ "loss": 0.2156,
1575
+ "step": 110000
1576
+ },
1577
+ {
1578
+ "epoch": 2.5528474067228832,
1579
+ "grad_norm": 0.4555191099643707,
1580
+ "learning_rate": 5.151918989508908e-06,
1581
+ "loss": 0.2192,
1582
+ "step": 110500
1583
+ },
1584
+ {
1585
+ "epoch": 2.564398752454661,
1586
+ "grad_norm": 8.327524185180664,
1587
+ "learning_rate": 5.127600278206057e-06,
1588
+ "loss": 0.2331,
1589
+ "step": 111000
1590
+ },
1591
+ {
1592
+ "epoch": 2.5759500981864387,
1593
+ "grad_norm": 0.23247091472148895,
1594
+ "learning_rate": 5.103281566903207e-06,
1595
+ "loss": 0.2068,
1596
+ "step": 111500
1597
+ },
1598
+ {
1599
+ "epoch": 2.5875014439182165,
1600
+ "grad_norm": 0.5253803730010986,
1601
+ "learning_rate": 5.078962855600357e-06,
1602
+ "loss": 0.2199,
1603
+ "step": 112000
1604
+ },
1605
+ {
1606
+ "epoch": 2.599052789649994,
1607
+ "grad_norm": 0.16480697691440582,
1608
+ "learning_rate": 5.054644144297506e-06,
1609
+ "loss": 0.2303,
1610
+ "step": 112500
1611
+ },
1612
+ {
1613
+ "epoch": 2.610604135381772,
1614
+ "grad_norm": 13.176816940307617,
1615
+ "learning_rate": 5.0303254329946546e-06,
1616
+ "loss": 0.2463,
1617
+ "step": 113000
1618
+ },
1619
+ {
1620
+ "epoch": 2.6221554811135497,
1621
+ "grad_norm": 0.30696120858192444,
1622
+ "learning_rate": 5.006006721691804e-06,
1623
+ "loss": 0.229,
1624
+ "step": 113500
1625
+ },
1626
+ {
1627
+ "epoch": 2.6337068268453274,
1628
+ "grad_norm": 10.747370719909668,
1629
+ "learning_rate": 4.981688010388954e-06,
1630
+ "loss": 0.239,
1631
+ "step": 114000
1632
+ },
1633
+ {
1634
+ "epoch": 2.645258172577105,
1635
+ "grad_norm": 10.800712585449219,
1636
+ "learning_rate": 4.957369299086103e-06,
1637
+ "loss": 0.2464,
1638
+ "step": 114500
1639
+ },
1640
+ {
1641
+ "epoch": 2.656809518308883,
1642
+ "grad_norm": 137.77926635742188,
1643
+ "learning_rate": 4.933050587783252e-06,
1644
+ "loss": 0.2082,
1645
+ "step": 115000
1646
+ },
1647
+ {
1648
+ "epoch": 2.6683608640406606,
1649
+ "grad_norm": 47.672916412353516,
1650
+ "learning_rate": 4.908731876480402e-06,
1651
+ "loss": 0.2169,
1652
+ "step": 115500
1653
+ },
1654
+ {
1655
+ "epoch": 2.6799122097724384,
1656
+ "grad_norm": 13.38288402557373,
1657
+ "learning_rate": 4.884413165177551e-06,
1658
+ "loss": 0.2316,
1659
+ "step": 116000
1660
+ },
1661
+ {
1662
+ "epoch": 2.691463555504216,
1663
+ "grad_norm": 1.015547275543213,
1664
+ "learning_rate": 4.860094453874701e-06,
1665
+ "loss": 0.2129,
1666
+ "step": 116500
1667
+ },
1668
+ {
1669
+ "epoch": 2.703014901235994,
1670
+ "grad_norm": 0.6633150577545166,
1671
+ "learning_rate": 4.8357757425718495e-06,
1672
+ "loss": 0.2346,
1673
+ "step": 117000
1674
+ },
1675
+ {
1676
+ "epoch": 2.7145662469677716,
1677
+ "grad_norm": 0.4231832027435303,
1678
+ "learning_rate": 4.811457031268999e-06,
1679
+ "loss": 0.2465,
1680
+ "step": 117500
1681
+ },
1682
+ {
1683
+ "epoch": 2.7261175926995493,
1684
+ "grad_norm": 0.1299211084842682,
1685
+ "learning_rate": 4.787138319966149e-06,
1686
+ "loss": 0.2395,
1687
+ "step": 118000
1688
+ },
1689
+ {
1690
+ "epoch": 2.7376689384313275,
1691
+ "grad_norm": 0.05142102763056755,
1692
+ "learning_rate": 4.762819608663298e-06,
1693
+ "loss": 0.2327,
1694
+ "step": 118500
1695
+ },
1696
+ {
1697
+ "epoch": 2.749220284163105,
1698
+ "grad_norm": 0.11573930829763412,
1699
+ "learning_rate": 4.738500897360447e-06,
1700
+ "loss": 0.2035,
1701
+ "step": 119000
1702
+ },
1703
+ {
1704
+ "epoch": 2.760771629894883,
1705
+ "grad_norm": 0.5103738307952881,
1706
+ "learning_rate": 4.714182186057597e-06,
1707
+ "loss": 0.2195,
1708
+ "step": 119500
1709
+ },
1710
+ {
1711
+ "epoch": 2.7723229756266603,
1712
+ "grad_norm": 0.45950281620025635,
1713
+ "learning_rate": 4.689863474754746e-06,
1714
+ "loss": 0.2523,
1715
+ "step": 120000
1716
+ },
1717
+ {
1718
+ "epoch": 2.7838743213584385,
1719
+ "grad_norm": 0.3899378180503845,
1720
+ "learning_rate": 4.665544763451896e-06,
1721
+ "loss": 0.2071,
1722
+ "step": 120500
1723
+ },
1724
+ {
1725
+ "epoch": 2.795425667090216,
1726
+ "grad_norm": 6.975862503051758,
1727
+ "learning_rate": 4.6412260521490445e-06,
1728
+ "loss": 0.2541,
1729
+ "step": 121000
1730
+ },
1731
+ {
1732
+ "epoch": 2.806977012821994,
1733
+ "grad_norm": 34.19971466064453,
1734
+ "learning_rate": 4.616907340846194e-06,
1735
+ "loss": 0.2136,
1736
+ "step": 121500
1737
+ },
1738
+ {
1739
+ "epoch": 2.8185283585537713,
1740
+ "grad_norm": 0.10601510852575302,
1741
+ "learning_rate": 4.592588629543344e-06,
1742
+ "loss": 0.2143,
1743
+ "step": 122000
1744
+ },
1745
+ {
1746
+ "epoch": 2.8300797042855494,
1747
+ "grad_norm": 7.919482231140137,
1748
+ "learning_rate": 4.5682699182404924e-06,
1749
+ "loss": 0.2346,
1750
+ "step": 122500
1751
+ },
1752
+ {
1753
+ "epoch": 2.8416310500173267,
1754
+ "grad_norm": 0.47603341937065125,
1755
+ "learning_rate": 4.543951206937642e-06,
1756
+ "loss": 0.2185,
1757
+ "step": 123000
1758
+ },
1759
+ {
1760
+ "epoch": 2.853182395749105,
1761
+ "grad_norm": 26.099021911621094,
1762
+ "learning_rate": 4.519632495634792e-06,
1763
+ "loss": 0.2307,
1764
+ "step": 123500
1765
+ },
1766
+ {
1767
+ "epoch": 2.8647337414808827,
1768
+ "grad_norm": 0.5455193519592285,
1769
+ "learning_rate": 4.495313784331941e-06,
1770
+ "loss": 0.2057,
1771
+ "step": 124000
1772
+ },
1773
+ {
1774
+ "epoch": 2.8762850872126604,
1775
+ "grad_norm": 2.6065433025360107,
1776
+ "learning_rate": 4.47099507302909e-06,
1777
+ "loss": 0.2273,
1778
+ "step": 124500
1779
+ },
1780
+ {
1781
+ "epoch": 2.887836432944438,
1782
+ "grad_norm": 0.44194814562797546,
1783
+ "learning_rate": 4.4466763617262395e-06,
1784
+ "loss": 0.2237,
1785
+ "step": 125000
1786
+ },
1787
+ {
1788
+ "epoch": 2.899387778676216,
1789
+ "grad_norm": 2.659928321838379,
1790
+ "learning_rate": 4.422357650423389e-06,
1791
+ "loss": 0.2162,
1792
+ "step": 125500
1793
+ },
1794
+ {
1795
+ "epoch": 2.9109391244079936,
1796
+ "grad_norm": 1.6970124244689941,
1797
+ "learning_rate": 4.398038939120539e-06,
1798
+ "loss": 0.2316,
1799
+ "step": 126000
1800
+ },
1801
+ {
1802
+ "epoch": 2.9224904701397714,
1803
+ "grad_norm": 103.99176025390625,
1804
+ "learning_rate": 4.3737202278176874e-06,
1805
+ "loss": 0.2209,
1806
+ "step": 126500
1807
+ },
1808
+ {
1809
+ "epoch": 2.934041815871549,
1810
+ "grad_norm": 0.775031328201294,
1811
+ "learning_rate": 4.349401516514837e-06,
1812
+ "loss": 0.2073,
1813
+ "step": 127000
1814
+ },
1815
+ {
1816
+ "epoch": 2.945593161603327,
1817
+ "grad_norm": 0.6462440490722656,
1818
+ "learning_rate": 4.325082805211987e-06,
1819
+ "loss": 0.2185,
1820
+ "step": 127500
1821
+ },
1822
+ {
1823
+ "epoch": 2.9571445073351046,
1824
+ "grad_norm": 2.127567768096924,
1825
+ "learning_rate": 4.300764093909136e-06,
1826
+ "loss": 0.2188,
1827
+ "step": 128000
1828
+ },
1829
+ {
1830
+ "epoch": 2.9686958530668823,
1831
+ "grad_norm": 0.3134165406227112,
1832
+ "learning_rate": 4.276445382606285e-06,
1833
+ "loss": 0.2495,
1834
+ "step": 128500
1835
+ },
1836
+ {
1837
+ "epoch": 2.98024719879866,
1838
+ "grad_norm": 0.4288942515850067,
1839
+ "learning_rate": 4.2521266713034345e-06,
1840
+ "loss": 0.2171,
1841
+ "step": 129000
1842
+ },
1843
+ {
1844
+ "epoch": 2.991798544530438,
1845
+ "grad_norm": 51.89253234863281,
1846
+ "learning_rate": 4.227807960000584e-06,
1847
+ "loss": 0.1999,
1848
+ "step": 129500
1849
+ },
1850
+ {
1851
+ "epoch": 3.0,
1852
+ "eval_accuracy": 0.887628508721266,
1853
+ "eval_f1": 0.8962368802798873,
1854
+ "eval_loss": 0.683422863483429,
1855
+ "eval_matthews_correlation": 0.7825720644338475,
1856
+ "eval_precision": 0.8417551592867161,
1857
+ "eval_recall": 0.9582592035034898,
1858
+ "eval_runtime": 541.3245,
1859
+ "eval_samples_per_second": 79.961,
1860
+ "eval_steps_per_second": 4.999,
1861
+ "step": 129855
1862
+ }
1863
+ ],
1864
+ "logging_steps": 500,
1865
+ "max_steps": 216425,
1866
+ "num_input_tokens_seen": 0,
1867
+ "num_train_epochs": 5,
1868
+ "save_steps": 200,
1869
+ "stateful_callbacks": {
1870
+ "TrainerControl": {
1871
+ "args": {
1872
+ "should_epoch_stop": false,
1873
+ "should_evaluate": false,
1874
+ "should_log": false,
1875
+ "should_save": true,
1876
+ "should_training_stop": false
1877
+ },
1878
+ "attributes": {}
1879
+ }
1880
+ },
1881
+ "total_flos": 2.8888139448728986e+17,
1882
+ "train_batch_size": 8,
1883
+ "trial_name": null,
1884
+ "trial_params": null
1885
+ }
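The `log_history` above mixes two record types: per-500-step training logs (`loss`, `grad_norm`, `learning_rate`) and end-of-epoch evaluation records (`eval_*`). Below is a minimal sketch for pulling the evaluation records out of a downloaded checkpoint; the local path is hypothetical.

```python
import json

# Hypothetical local path to one of the uploaded checkpoints.
with open("checkpoint-129855/trainer_state.json") as fh:
    state = json.load(fh)

# Evaluation records are the entries carrying "eval_accuracy";
# everything else is a per-500-step training log.
for entry in state["log_history"]:
    if "eval_accuracy" in entry:
        print(f"epoch {entry['epoch']:.0f} (step {entry['step']}): "
              f"acc={entry['eval_accuracy']:.4f}, f1={entry['eval_f1']:.4f}, "
              f"mcc={entry['eval_matthews_correlation']:.4f}")
```

For this checkpoint the last line printed would be the epoch-3 evaluation shown above (accuracy ≈ 0.888, F1 ≈ 0.896).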
checkpoint-129855/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505
+ size 5432
checkpoint-173140/config.json ADDED
@@ -0,0 +1,48 @@
+ {
+   "_name_or_path": "models/plant-dnamamba-BPE",
+   "architectures": ["MambaForSequenceClassification"],
+   "auto_map": {
+     "AutoConfig": "configuration_mamba.MambaConfig",
+     "AutoModel": "modeling_mamba.MambaModel",
+     "AutoModelForCausalLM": "modeling_mamba.MambaForCausalLM",
+     "AutoModelForSequenceClassification": "modeling_mamba.MambaForSequenceClassification"
+   },
+   "bos_token_id": 0,
+   "conv_kernel": 4,
+   "d_inner": 1536,
+   "d_model": 768,
+   "eos_token_id": 0,
+   "expand": 2,
+   "fused_add_norm": true,
+   "hidden_act": "silu",
+   "hidden_size": 768,
+   "initializer_range": 0.1,
+   "intermediate_size": 1536,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "mamba",
+   "n_layer": 24,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "pad_vocab_size_multiple": 8,
+   "problem_type": "single_label_classification",
+   "rescale_prenorm_residual": false,
+   "residual_in_fp32": true,
+   "rms_norm": true,
+   "ssm_cfg": {},
+   "state_size": 16,
+   "time_step_floor": 0.0001,
+   "time_step_init_scheme": "random",
+   "time_step_max": 0.1,
+   "time_step_min": 0.001,
+   "time_step_rank": 48,
+   "time_step_scale": 1.0,
+   "torch_dtype": "float32",
+   "transformers_version": "4.45.2",
+   "use_bias": false,
+   "use_cache": false,
+   "use_conv_bias": true,
+   "use_mambapy": true,
+   "vocab_size": 8000
+ }
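Because `auto_map` routes the `Auto*` classes to the bundled `configuration_mamba.py`/`modeling_mamba.py`, loading any of these checkpoints requires `trust_remote_code=True`. A minimal sketch, assuming the checkpoint folder has been downloaded locally (the path is hypothetical):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "checkpoint-173140"  # hypothetical local path

# trust_remote_code=True lets transformers import the custom
# MambaForSequenceClassification declared in this config's "auto_map".
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, trust_remote_code=True)
print(model.config.num_hidden_layers, model.config.vocab_size)  # 24, 8000
```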
checkpoint-173140/configuration_mamba.py ADDED
@@ -0,0 +1,157 @@
+ # coding=utf-8
+ # Copyright 2024 The HuggingFace Inc. team.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MAMBA configuration"""
+
+ import math
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class MambaConfig(PretrainedConfig):
+     """
+     This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the MAMBA
+     [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 50280):
+             Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`MambaModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the embeddings and hidden states.
+         state_size (`int`, *optional*, defaults to 16): shape of the state space latents.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the model.
+         layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
+             The epsilon to use in the layer normalization layers.
+         pad_token_id (`int`, *optional*, defaults to 0):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 0):
+             The id of the beginning of sentence token in the vocabulary.
+         eos_token_id (`int`, *optional*, defaults to 0):
+             The id of the end of sentence token in the vocabulary.
+         expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
+         conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
+         use_bias (`bool`, *optional*, defaults to `False`):
+             Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
+         use_conv_bias (`bool`, *optional*, defaults to `True`):
+             Whether or not to use bias in the convolution layer of the mixer block.
+         hidden_act (`str`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         initializer_range (`float`, *optional*, defaults to 0.1):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         residual_in_fp32 (`bool`, *optional*, defaults to `True`):
+             Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model.
+         time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
+             Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
+         time_step_scale (`float`, *optional*, defaults to 1.0):
+             Scale used to scale `dt_proj.bias`.
+         time_step_min (`float`, *optional*, defaults to 0.001):
+             Minimum `time_step` used to bound `dt_proj.bias`.
+         time_step_max (`float`, *optional*, defaults to 0.1):
+             Maximum `time_step` used to bound `dt_proj.bias`.
+         time_step_init_scheme (`str`, *optional*, defaults to `"random"`):
+             Init scheme used for `dt_proj.weight`. Should be one of `["random", "uniform"]`.
+         time_step_floor (`float`, *optional*, defaults to 0.0001):
+             Minimum clamping value of the `dt_proj.bias` layer initialization.
+         rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
+             Whether or not to rescale `out_proj` weights when initializing.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the cache should be used.
+         use_mambapy (`bool`, *optional*, defaults to `False`):
+             Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.
+
+
+     Example:
+
+     ```python
+     >>> from transformers import MambaConfig, MambaModel
+
+     >>> # Initializing a Mamba configuration
+     >>> configuration = MambaConfig()
+
+     >>> # Initializing a model (with random weights) from the configuration
+     >>> model = MambaModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "mamba"
+
+     def __init__(
+         self,
+         vocab_size=50280,
+         hidden_size=768,
+         state_size=16,
+         num_hidden_layers=32,
+         layer_norm_epsilon=1e-5,
+         pad_token_id=0,
+         bos_token_id=0,
+         eos_token_id=0,
+         expand=2,
+         conv_kernel=4,
+         use_bias=False,
+         use_conv_bias=True,
+         hidden_act="silu",
+         initializer_range=0.1,
+         residual_in_fp32=True,
+         time_step_rank="auto",
+         time_step_scale=1.0,
+         time_step_min=0.001,
+         time_step_max=0.1,
+         time_step_init_scheme="random",
+         time_step_floor=1e-4,
+         rescale_prenorm_residual=False,
+         use_cache=True,
+         use_mambapy=False,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.state_size = state_size
+         self.num_hidden_layers = num_hidden_layers
+         self.layer_norm_epsilon = layer_norm_epsilon
+         self.conv_kernel = conv_kernel
+         self.expand = expand
+         self.intermediate_size = int(expand * self.hidden_size)
+         self.bos_token_id = bos_token_id
+         self.eos_token_id = eos_token_id
+         self.pad_token_id = pad_token_id
+         self.use_bias = use_bias
+         self.use_conv_bias = use_conv_bias
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
+         self.time_step_scale = time_step_scale
+         self.time_step_min = time_step_min
+         self.time_step_max = time_step_max
+         self.time_step_init_scheme = time_step_init_scheme
+         self.time_step_floor = time_step_floor
+         self.rescale_prenorm_residual = rescale_prenorm_residual
+         self.residual_in_fp32 = residual_in_fp32
+         self.use_cache = use_cache
+         self.use_mambapy = use_mambapy
+
+         super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, **kwargs)
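The `config.json` above is this class serialized to disk. As a cross-check, here is a sketch that rebuilds the same configuration from keyword arguments; note how `time_step_rank="auto"` resolves to `math.ceil(768 / 16) = 48`, matching the stored value.

```python
from configuration_mamba import MambaConfig

config = MambaConfig(
    vocab_size=8000,        # BPE vocabulary, padded to a multiple of 8
    hidden_size=768,
    state_size=16,
    num_hidden_layers=24,
    expand=2,               # intermediate_size = int(2 * 768) = 1536
    time_step_rank="auto",  # resolves to ceil(768 / 16) = 48
    use_mambapy=True,
)
assert config.intermediate_size == 1536
assert config.time_step_rank == 48
```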
checkpoint-173140/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ab164a1bcbf5f634e71959e384a7fcd27bca650bea570e4676938abc8c9bf9f2
+ size 386689624
checkpoint-173140/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4764cf09a350af44b00c6d874f2e6cd4c5826ab1dd89f954360b1c74d6b34c06
+ size 773531386
checkpoint-173140/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3092558c13fe7575c8f03a6e8025ff8c0cae6f9b5f0a7cc65e64952af89fc65f
+ size 14244
checkpoint-173140/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1cb434e42cfd8510ce8f32d14437ceade1d955bbfe7724829b03056238adf54
+ size 1064
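The four entries above are Git LFS pointer files (a version line, the `oid sha256` of the blob, and its byte size), not the tensors themselves; the ~386 MB `model.safetensors` blob is fetched on `git lfs pull` or via `huggingface_hub`. Once resolved, the weights can be inspected directly, as in this sketch (path hypothetical):

```python
from safetensors.torch import load_file

# Reads the resolved LFS blob, not the 3-line pointer file shown above.
state_dict = load_file("checkpoint-173140/model.safetensors")
n_params = sum(t.numel() for t in state_dict.values())
print(f"{len(state_dict)} tensors, {n_params / 1e6:.1f}M parameters")
```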
checkpoint-173140/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "bos_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
+   "eos_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
+   "pad_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
+   "unk_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
+ }
checkpoint-173140/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
checkpoint-173140/tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "add_bos_token": false,
+   "add_eos_token": false,
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
+     "1": {"content": "<|padding|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "max_length": 1024,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "<|endoftext|>",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "stride": 0,
+   "tokenizer_class": "GPTNeoXTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "<|endoftext|>"
+ }
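All four special tokens resolve to `<|endoftext|>` (id 0), and `model_max_length` is 512 BPE tokens, so a long promoter is right-truncated (`"truncation_side": "right"`) if its BPE encoding exceeds 512 tokens. A hypothetical tokenization sketch:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-173140")  # hypothetical path

seq = "ATGCGTACGTTAGCAAGGCTTACG" * 30  # stand-in promoter fragment
enc = tokenizer(seq, truncation=True, max_length=512, return_tensors="pt")
print(enc["input_ids"].shape)  # at most (1, 512), per model_max_length above
```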
checkpoint-173140/trainer_state.json ADDED
@@ -0,0 +1,2507 @@
+ {
+ "best_metric": 0.8962368802798873,
+ "best_model_checkpoint": "finetune/models/plant-dnamamba-BPE-NoduleExpDup/checkpoint-129855",
+ "epoch": 4.0,
+ "eval_steps": 500,
+ "global_step": 173140,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {"epoch": 0.011551345731777752, "grad_norm": 29.45172119140625, "learning_rate": 4.620218074293107e-07, "loss": 1.0687, "step": 500},
+ {"epoch": 0.023102691463555505, "grad_norm": 31.642974853515625, "learning_rate": 9.240436148586214e-07, "loss": 0.9807, "step": 1000},
+ {"epoch": 0.034654037195333255, "grad_norm": 12.195789337158203, "learning_rate": 1.3860654222879322e-06, "loss": 0.8373, "step": 1500},
+ {"epoch": 0.04620538292711101, "grad_norm": 13.536879539489746, "learning_rate": 1.8480872297172428e-06, "loss": 0.7818, "step": 2000},
+ {"epoch": 0.05775672865888876, "grad_norm": 13.85306453704834, "learning_rate": 2.3101090371465534e-06, "loss": 0.7234, "step": 2500},
+ {"epoch": 0.06930807439066651, "grad_norm": 11.797886848449707, "learning_rate": 2.7721308445758644e-06, "loss": 0.6925, "step": 3000},
+ {"epoch": 0.08085942012244426, "grad_norm": 6.770650386810303, "learning_rate": 3.2341526520051746e-06, "loss": 0.6751, "step": 3500},
+ {"epoch": 0.09241076585422202, "grad_norm": 10.289820671081543, "learning_rate": 3.6961744594344856e-06, "loss": 0.6636, "step": 4000},
+ {"epoch": 0.10396211158599977, "grad_norm": 15.198243141174316, "learning_rate": 4.158196266863797e-06, "loss": 0.6563, "step": 4500},
+ {"epoch": 0.11551345731777753, "grad_norm": 13.971009254455566, "learning_rate": 4.620218074293107e-06, "loss": 0.6481, "step": 5000},
+ {"epoch": 0.12706480304955528, "grad_norm": 8.441276550292969, "learning_rate": 5.082239881722417e-06, "loss": 0.6549, "step": 5500},
+ {"epoch": 0.13861614878133302, "grad_norm": 14.8948335647583, "learning_rate": 5.544261689151729e-06, "loss": 0.6507, "step": 6000},
+ {"epoch": 0.1501674945131108, "grad_norm": 7.725317478179932, "learning_rate": 6.006283496581039e-06, "loss": 0.6544, "step": 6500},
+ {"epoch": 0.16171884024488853, "grad_norm": 17.12333106994629, "learning_rate": 6.468305304010349e-06, "loss": 0.6476, "step": 7000},
+ {"epoch": 0.17327018597666627, "grad_norm": 7.033656120300293, "learning_rate": 6.930327111439661e-06, "loss": 0.6496, "step": 7500},
+ {"epoch": 0.18482153170844404, "grad_norm": 7.053399085998535, "learning_rate": 7.392348918868971e-06, "loss": 0.6523, "step": 8000},
+ {"epoch": 0.19637287744022178, "grad_norm": 6.655728340148926, "learning_rate": 7.854370726298282e-06, "loss": 0.6455, "step": 8500},
+ {"epoch": 0.20792422317199954, "grad_norm": 8.08369255065918, "learning_rate": 8.316392533727593e-06, "loss": 0.6415, "step": 9000},
+ {"epoch": 0.21947556890377729, "grad_norm": 3.947148084640503, "learning_rate": 8.778414341156903e-06, "loss": 0.6514, "step": 9500},
+ {"epoch": 0.23102691463555505, "grad_norm": 8.264023780822754, "learning_rate": 9.240436148586214e-06, "loss": 0.6431, "step": 10000},
+ {"epoch": 0.2425782603673328, "grad_norm": 4.74052095413208, "learning_rate": 9.702457956015525e-06, "loss": 0.6331, "step": 10500},
+ {"epoch": 0.25412960609911056, "grad_norm": 12.625064849853516, "learning_rate": 9.991342538776186e-06, "loss": 0.6449, "step": 11000},
+ {"epoch": 0.2656809518308883, "grad_norm": 17.814157485961914, "learning_rate": 9.967023827473336e-06, "loss": 0.6414, "step": 11500},
+ {"epoch": 0.27723229756266604, "grad_norm": 15.34868049621582, "learning_rate": 9.942705116170485e-06, "loss": 0.6287, "step": 12000},
+ {"epoch": 0.2887836432944438, "grad_norm": 5.785658836364746, "learning_rate": 9.918386404867634e-06, "loss": 0.6417, "step": 12500},
+ {"epoch": 0.3003349890262216, "grad_norm": 3.7156598567962646, "learning_rate": 9.894067693564784e-06, "loss": 0.6436, "step": 13000},
+ {"epoch": 0.3118863347579993, "grad_norm": 12.272802352905273, "learning_rate": 9.869748982261933e-06, "loss": 0.6505, "step": 13500},
+ {"epoch": 0.32343768048977706, "grad_norm": 15.592353820800781, "learning_rate": 9.845430270959082e-06, "loss": 0.6313, "step": 14000},
+ {"epoch": 0.3349890262215548, "grad_norm": 4.684946537017822, "learning_rate": 9.821111559656232e-06, "loss": 0.6457, "step": 14500},
+ {"epoch": 0.34654037195333254, "grad_norm": 6.0165300369262695, "learning_rate": 9.79679284835338e-06, "loss": 0.641, "step": 15000},
+ {"epoch": 0.35809171768511033, "grad_norm": 10.924590110778809, "learning_rate": 9.772474137050531e-06, "loss": 0.6293, "step": 15500},
+ {"epoch": 0.3696430634168881, "grad_norm": 7.296951770782471, "learning_rate": 9.74815542574768e-06, "loss": 0.627, "step": 16000},
+ {"epoch": 0.3811944091486658, "grad_norm": 14.750964164733887, "learning_rate": 9.723836714444829e-06, "loss": 0.6254, "step": 16500},
+ {"epoch": 0.39274575488044355, "grad_norm": 7.377384662628174, "learning_rate": 9.699518003141977e-06, "loss": 0.6287, "step": 17000},
+ {"epoch": 0.40429710061222135, "grad_norm": 6.540673732757568, "learning_rate": 9.675199291839128e-06, "loss": 0.6309, "step": 17500},
+ {"epoch": 0.4158484463439991, "grad_norm": 6.4445295333862305, "learning_rate": 9.650880580536277e-06, "loss": 0.6302, "step": 18000},
+ {"epoch": 0.42739979207577683, "grad_norm": 7.85426664352417, "learning_rate": 9.626561869233427e-06, "loss": 0.6201, "step": 18500},
+ {"epoch": 0.43895113780755457, "grad_norm": 5.960766792297363, "learning_rate": 9.602243157930576e-06, "loss": 0.626, "step": 19000},
+ {"epoch": 0.4505024835393323, "grad_norm": 9.376029968261719, "learning_rate": 9.577924446627726e-06, "loss": 0.6164, "step": 19500},
+ {"epoch": 0.4620538292711101, "grad_norm": 17.05916976928711, "learning_rate": 9.553605735324875e-06, "loss": 0.6232, "step": 20000},
+ {"epoch": 0.47360517500288785, "grad_norm": 7.523278713226318, "learning_rate": 9.529287024022024e-06, "loss": 0.601, "step": 20500},
+ {"epoch": 0.4851565207346656, "grad_norm": 6.75208044052124, "learning_rate": 9.504968312719172e-06, "loss": 0.6157, "step": 21000},
+ {"epoch": 0.4967078664664433, "grad_norm": 12.919069290161133, "learning_rate": 9.480649601416323e-06, "loss": 0.6098, "step": 21500},
+ {"epoch": 0.5082592121982211, "grad_norm": 19.89571189880371, "learning_rate": 9.456330890113472e-06, "loss": 0.6131, "step": 22000},
+ {"epoch": 0.5198105579299989, "grad_norm": 7.436350345611572, "learning_rate": 9.432012178810622e-06, "loss": 0.6121, "step": 22500},
+ {"epoch": 0.5313619036617766, "grad_norm": 10.576221466064453, "learning_rate": 9.40769346750777e-06, "loss": 0.6005, "step": 23000},
+ {"epoch": 0.5429132493935543, "grad_norm": 12.731452941894531, "learning_rate": 9.383374756204921e-06, "loss": 0.6217, "step": 23500},
+ {"epoch": 0.5544645951253321, "grad_norm": 9.219985961914062, "learning_rate": 9.35905604490207e-06, "loss": 0.5985, "step": 24000},
+ {"epoch": 0.5660159408571098, "grad_norm": 12.659988403320312, "learning_rate": 9.334737333599219e-06, "loss": 0.5951, "step": 24500},
+ {"epoch": 0.5775672865888876, "grad_norm": 32.879234313964844, "learning_rate": 9.310418622296367e-06, "loss": 0.5961, "step": 25000},
+ {"epoch": 0.5891186323206653, "grad_norm": 9.763121604919434, "learning_rate": 9.286099910993518e-06, "loss": 0.6081, "step": 25500},
+ {"epoch": 0.6006699780524432, "grad_norm": 9.057079315185547, "learning_rate": 9.261781199690667e-06, "loss": 0.5863, "step": 26000},
+ {"epoch": 0.6122213237842209, "grad_norm": 17.381582260131836, "learning_rate": 9.237462488387817e-06, "loss": 0.5857, "step": 26500},
+ {"epoch": 0.6237726695159986, "grad_norm": 18.3979549407959, "learning_rate": 9.213143777084966e-06, "loss": 0.5853, "step": 27000},
+ {"epoch": 0.6353240152477764, "grad_norm": 11.13846206665039, "learning_rate": 9.188825065782114e-06, "loss": 0.582, "step": 27500},
+ {"epoch": 0.6468753609795541, "grad_norm": 8.506060600280762, "learning_rate": 9.164506354479265e-06, "loss": 0.5773, "step": 28000},
+ {"epoch": 0.6584267067113319, "grad_norm": 9.814123153686523, "learning_rate": 9.140187643176414e-06, "loss": 0.5797, "step": 28500},
+ {"epoch": 0.6699780524431096, "grad_norm": 11.70206069946289, "learning_rate": 9.115868931873562e-06, "loss": 0.5587, "step": 29000},
+ {"epoch": 0.6815293981748873, "grad_norm": 11.949383735656738, "learning_rate": 9.091550220570711e-06, "loss": 0.5769, "step": 29500},
+ {"epoch": 0.6930807439066651, "grad_norm": 11.51278018951416, "learning_rate": 9.067231509267862e-06, "loss": 0.5798, "step": 30000},
+ {"epoch": 0.7046320896384429, "grad_norm": 13.705997467041016, "learning_rate": 9.042912797965012e-06, "loss": 0.5585, "step": 30500},
+ {"epoch": 0.7161834353702207, "grad_norm": 18.50054168701172, "learning_rate": 9.01859408666216e-06, "loss": 0.5656, "step": 31000},
+ {"epoch": 0.7277347811019984, "grad_norm": 23.920692443847656, "learning_rate": 8.99427537535931e-06, "loss": 0.5493, "step": 31500},
+ {"epoch": 0.7392861268337761, "grad_norm": 15.425186157226562, "learning_rate": 8.96995666405646e-06, "loss": 0.5538, "step": 32000},
+ {"epoch": 0.7508374725655539, "grad_norm": 11.077018737792969, "learning_rate": 8.945637952753609e-06, "loss": 0.5553, "step": 32500},
+ {"epoch": 0.7623888182973316, "grad_norm": 17.128128051757812, "learning_rate": 8.921319241450757e-06, "loss": 0.5657, "step": 33000},
+ {"epoch": 0.7739401640291094, "grad_norm": 37.22261428833008, "learning_rate": 8.897000530147906e-06, "loss": 0.5573, "step": 33500},
+ {"epoch": 0.7854915097608871, "grad_norm": 18.649255752563477, "learning_rate": 8.872681818845056e-06, "loss": 0.5337, "step": 34000},
+ {"epoch": 0.7970428554926648, "grad_norm": 11.49511432647705, "learning_rate": 8.848363107542205e-06, "loss": 0.5442, "step": 34500},
+ {"epoch": 0.8085942012244427, "grad_norm": 20.64067840576172, "learning_rate": 8.824044396239356e-06, "loss": 0.5632, "step": 35000},
+ {"epoch": 0.8201455469562204, "grad_norm": 18.51673126220703, "learning_rate": 8.799725684936504e-06, "loss": 0.5454, "step": 35500},
+ {"epoch": 0.8316968926879982, "grad_norm": 18.139997482299805, "learning_rate": 8.775406973633655e-06, "loss": 0.5371, "step": 36000},
+ {"epoch": 0.8432482384197759, "grad_norm": 21.852693557739258, "learning_rate": 8.751088262330804e-06, "loss": 0.539, "step": 36500},
+ {"epoch": 0.8547995841515537, "grad_norm": 31.914161682128906, "learning_rate": 8.726769551027952e-06, "loss": 0.5383, "step": 37000},
+ {"epoch": 0.8663509298833314, "grad_norm": 9.24451732635498, "learning_rate": 8.702450839725101e-06, "loss": 0.548, "step": 37500},
+ {"epoch": 0.8779022756151091, "grad_norm": 27.3262882232666, "learning_rate": 8.678132128422251e-06, "loss": 0.524, "step": 38000},
+ {"epoch": 0.8894536213468869, "grad_norm": 23.231952667236328, "learning_rate": 8.6538134171194e-06, "loss": 0.5121, "step": 38500},
+ {"epoch": 0.9010049670786646, "grad_norm": 28.35572052001953, "learning_rate": 8.62949470581655e-06, "loss": 0.5333, "step": 39000},
+ {"epoch": 0.9125563128104425, "grad_norm": 24.61405372619629, "learning_rate": 8.6051759945137e-06, "loss": 0.5015, "step": 39500},
+ {"epoch": 0.9241076585422202, "grad_norm": 30.62819480895996, "learning_rate": 8.580857283210848e-06, "loss": 0.5092, "step": 40000},
+ {"epoch": 0.935659004273998, "grad_norm": 19.295331954956055, "learning_rate": 8.556538571907999e-06, "loss": 0.5125, "step": 40500},
+ {"epoch": 0.9472103500057757, "grad_norm": 10.785584449768066, "learning_rate": 8.532219860605147e-06, "loss": 0.5161, "step": 41000},
+ {"epoch": 0.9587616957375534, "grad_norm": 24.81365394592285, "learning_rate": 8.507901149302296e-06, "loss": 0.498, "step": 41500},
+ {"epoch": 0.9703130414693312, "grad_norm": 48.026973724365234, "learning_rate": 8.483582437999446e-06, "loss": 0.4916, "step": 42000},
+ {"epoch": 0.9818643872011089, "grad_norm": 6.653892517089844, "learning_rate": 8.459263726696595e-06, "loss": 0.4944, "step": 42500},
+ {"epoch": 0.9934157329328867, "grad_norm": 39.814945220947266, "learning_rate": 8.434945015393746e-06, "loss": 0.4954, "step": 43000},
+ {"epoch": 1.0, "eval_accuracy": 0.7796003234376805, "eval_f1": 0.7890500619140279, "eval_loss": 0.4737609326839447, "eval_matthews_correlation": 0.5599099791389662, "eval_precision": 0.7656524910955671, "eval_recall": 0.8139227225035354, "eval_runtime": 550.4755, "eval_samples_per_second": 78.632, "eval_steps_per_second": 4.916, "step": 43285},
+ {"epoch": 1.0049670786646645, "grad_norm": 22.5259952545166, "learning_rate": 8.410626304090894e-06, "loss": 0.4339, "step": 43500},
+ {"epoch": 1.0165184243964422, "grad_norm": 25.5583438873291, "learning_rate": 8.386307592788043e-06, "loss": 0.3274, "step": 44000},
+ {"epoch": 1.02806977012822, "grad_norm": 9.524903297424316, "learning_rate": 8.361988881485194e-06, "loss": 0.3372, "step": 44500},
+ {"epoch": 1.0396211158599977, "grad_norm": 37.77333068847656, "learning_rate": 8.337670170182342e-06, "loss": 0.3471, "step": 45000},
+ {"epoch": 1.0511724615917755, "grad_norm": 47.2765007019043, "learning_rate": 8.313351458879491e-06, "loss": 0.3436, "step": 45500},
+ {"epoch": 1.0627238073235532, "grad_norm": 58.79914474487305, "learning_rate": 8.289032747576641e-06, "loss": 0.3524, "step": 46000},
+ {"epoch": 1.074275153055331, "grad_norm": 23.910385131835938, "learning_rate": 8.26471403627379e-06, "loss": 0.3283, "step": 46500},
+ {"epoch": 1.0858264987871087, "grad_norm": 65.18335723876953, "learning_rate": 8.24039532497094e-06, "loss": 0.3529, "step": 47000},
+ {"epoch": 1.0973778445188864, "grad_norm": 29.333236694335938, "learning_rate": 8.21607661366809e-06, "loss": 0.3471, "step": 47500},
+ {"epoch": 1.1089291902506642, "grad_norm": 41.33262252807617, "learning_rate": 8.191757902365238e-06, "loss": 0.3507, "step": 48000},
+ {"epoch": 1.120480535982442, "grad_norm": 91.7591781616211, "learning_rate": 8.167439191062387e-06, "loss": 0.3506, "step": 48500},
+ {"epoch": 1.1320318817142196, "grad_norm": 7.366957187652588, "learning_rate": 8.143120479759537e-06, "loss": 0.3526, "step": 49000},
+ {"epoch": 1.1435832274459974, "grad_norm": 31.159626007080078, "learning_rate": 8.118801768456686e-06, "loss": 0.3423, "step": 49500},
+ {"epoch": 1.1551345731777751, "grad_norm": 15.73024845123291, "learning_rate": 8.094483057153836e-06, "loss": 0.3516, "step": 50000},
+ {"epoch": 1.1666859189095529, "grad_norm": 66.58766174316406, "learning_rate": 8.070164345850985e-06, "loss": 0.3435, "step": 50500},
+ {"epoch": 1.1782372646413308, "grad_norm": 48.182308197021484, "learning_rate": 8.045845634548136e-06, "loss": 0.344, "step": 51000},
+ {"epoch": 1.1897886103731086, "grad_norm": 42.61834716796875, "learning_rate": 8.021526923245284e-06, "loss": 0.3739, "step": 51500},
+ {"epoch": 1.2013399561048863, "grad_norm": 59.38914489746094, "learning_rate": 7.997208211942433e-06, "loss": 0.3399, "step": 52000},
+ {"epoch": 1.212891301836664, "grad_norm": 5.7647857666015625, "learning_rate": 7.972889500639582e-06, "loss": 0.3518, "step": 52500},
+ {"epoch": 1.2244426475684418, "grad_norm": 73.71373748779297, "learning_rate": 7.948570789336732e-06, "loss": 0.3523, "step": 53000},
+ {"epoch": 1.2359939933002195, "grad_norm": 14.03942584991455, "learning_rate": 7.924252078033881e-06, "loss": 0.3512, "step": 53500},
+ {"epoch": 1.2475453390319973, "grad_norm": 53.254791259765625, "learning_rate": 7.899933366731031e-06, "loss": 0.3454, "step": 54000},
+ {"epoch": 1.259096684763775, "grad_norm": 4.009112358093262, "learning_rate": 7.87561465542818e-06, "loss": 0.3406, "step": 54500},
+ {"epoch": 1.2706480304955527, "grad_norm": 37.73891067504883, "learning_rate": 7.85129594412533e-06, "loss": 0.3474, "step": 55000},
+ {"epoch": 1.2821993762273305, "grad_norm": 84.36405181884766, "learning_rate": 7.82697723282248e-06, "loss": 0.3551, "step": 55500},
+ {"epoch": 1.2937507219591082, "grad_norm": 43.2830810546875, "learning_rate": 7.802658521519628e-06, "loss": 0.3521, "step": 56000},
+ {"epoch": 1.305302067690886, "grad_norm": 143.8572998046875, "learning_rate": 7.778339810216777e-06, "loss": 0.3544, "step": 56500},
+ {"epoch": 1.3168534134226637, "grad_norm": 19.368745803833008, "learning_rate": 7.754021098913927e-06, "loss": 0.332, "step": 57000},
+ {"epoch": 1.3284047591544415, "grad_norm": 37.57652282714844, "learning_rate": 7.729702387611076e-06, "loss": 0.3614, "step": 57500},
+ {"epoch": 1.3399561048862192, "grad_norm": 12.710687637329102, "learning_rate": 7.705383676308226e-06, "loss": 0.3397, "step": 58000},
+ {"epoch": 1.351507450617997, "grad_norm": 9.734477043151855, "learning_rate": 7.681064965005375e-06, "loss": 0.3337, "step": 58500},
+ {"epoch": 1.3630587963497747, "grad_norm": 50.98896408081055, "learning_rate": 7.656746253702524e-06, "loss": 0.3466, "step": 59000},
+ {"epoch": 1.3746101420815524, "grad_norm": 36.90542221069336, "learning_rate": 7.632427542399674e-06, "loss": 0.3345, "step": 59500},
+ {"epoch": 1.3861614878133302, "grad_norm": 87.90083312988281, "learning_rate": 7.608108831096823e-06, "loss": 0.3268, "step": 60000},
+ {"epoch": 1.397712833545108, "grad_norm": 33.43013000488281, "learning_rate": 7.583790119793973e-06, "loss": 0.3572, "step": 60500},
+ {"epoch": 1.4092641792768856, "grad_norm": 10.33069896697998, "learning_rate": 7.559471408491121e-06, "loss": 0.3565, "step": 61000},
+ {"epoch": 1.4208155250086634, "grad_norm": 1.686011791229248, "learning_rate": 7.535152697188272e-06, "loss": 0.3647, "step": 61500},
+ {"epoch": 1.4323668707404413, "grad_norm": 20.76148223876953, "learning_rate": 7.5108339858854206e-06, "loss": 0.3284, "step": 62000},
+ {"epoch": 1.443918216472219, "grad_norm": 9.261433601379395, "learning_rate": 7.48651527458257e-06, "loss": 0.3474, "step": 62500},
+ {"epoch": 1.4554695622039968, "grad_norm": 38.97019577026367, "learning_rate": 7.462196563279719e-06, "loss": 0.3433, "step": 63000},
+ {"epoch": 1.4670209079357746, "grad_norm": 43.138450622558594, "learning_rate": 7.437877851976869e-06, "loss": 0.3406, "step": 63500},
+ {"epoch": 1.4785722536675523, "grad_norm": 10.66066837310791, "learning_rate": 7.413559140674018e-06, "loss": 0.3533, "step": 64000},
+ {"epoch": 1.49012359939933, "grad_norm": 32.289588928222656, "learning_rate": 7.389240429371168e-06, "loss": 0.3377, "step": 64500},
+ {"epoch": 1.5016749451311078, "grad_norm": 85.28985595703125, "learning_rate": 7.364921718068316e-06, "loss": 0.3598, "step": 65000},
+ {"epoch": 1.5132262908628855, "grad_norm": 56.37942123413086, "learning_rate": 7.340603006765467e-06, "loss": 0.3354, "step": 65500},
+ {"epoch": 1.5247776365946633, "grad_norm": 91.6646499633789, "learning_rate": 7.3162842954626156e-06, "loss": 0.3557, "step": 66000},
+ {"epoch": 1.536328982326441, "grad_norm": 22.282794952392578, "learning_rate": 7.291965584159765e-06, "loss": 0.3375, "step": 66500},
+ {"epoch": 1.5478803280582187, "grad_norm": 23.59638214111328, "learning_rate": 7.267646872856914e-06, "loss": 0.3613, "step": 67000},
+ {"epoch": 1.5594316737899967, "grad_norm": 101.47071838378906, "learning_rate": 7.243328161554064e-06, "loss": 0.3625, "step": 67500},
+ {"epoch": 1.5709830195217744, "grad_norm": 10.680140495300293, "learning_rate": 7.219009450251213e-06, "loss": 0.3447, "step": 68000},
+ {"epoch": 1.5825343652535522, "grad_norm": 3.526810646057129, "learning_rate": 7.194690738948363e-06, "loss": 0.3752, "step": 68500},
+ {"epoch": 1.59408571098533, "grad_norm": 63.79883575439453, "learning_rate": 7.170372027645511e-06, "loss": 0.3556, "step": 69000},
+ {"epoch": 1.6056370567171077, "grad_norm": 57.77899169921875, "learning_rate": 7.14605331634266e-06, "loss": 0.3734, "step": 69500},
+ {"epoch": 1.6171884024488854, "grad_norm": 12.811071395874023, "learning_rate": 7.1217346050398105e-06, "loss": 0.3314, "step": 70000},
+ {"epoch": 1.6287397481806631, "grad_norm": 45.64273452758789, "learning_rate": 7.09741589373696e-06, "loss": 0.3603, "step": 70500},
+ {"epoch": 1.6402910939124409, "grad_norm": 0.7724096775054932, "learning_rate": 7.073097182434109e-06, "loss": 0.3531, "step": 71000},
+ {"epoch": 1.6518424396442186, "grad_norm": 111.21202850341797, "learning_rate": 7.048778471131258e-06, "loss": 0.3687, "step": 71500},
+ {"epoch": 1.6633937853759964, "grad_norm": 2.9768288135528564, "learning_rate": 7.024459759828408e-06, "loss": 0.362, "step": 72000},
+ {"epoch": 1.674945131107774, "grad_norm": 27.48809242248535, "learning_rate": 7.000141048525558e-06, "loss": 0.3616, "step": 72500},
+ {"epoch": 1.6864964768395518, "grad_norm": 48.34687805175781, "learning_rate": 6.975822337222706e-06, "loss": 0.3367, "step": 73000},
+ {"epoch": 1.6980478225713296, "grad_norm": 39.69007110595703, "learning_rate": 6.951503625919855e-06, "loss": 0.3621, "step": 73500},
+ {"epoch": 1.7095991683031073, "grad_norm": 110.33565521240234, "learning_rate": 6.9271849146170055e-06, "loss": 0.3539, "step": 74000},
+ {"epoch": 1.721150514034885, "grad_norm": 41.324222564697266, "learning_rate": 6.902866203314154e-06, "loss": 0.3433, "step": 74500},
+ {"epoch": 1.7327018597666628, "grad_norm": 73.07483673095703, "learning_rate": 6.878547492011304e-06, "loss": 0.3581, "step": 75000},
+ {"epoch": 1.7442532054984405, "grad_norm": 2.784154176712036, "learning_rate": 6.854228780708453e-06, "loss": 0.3506, "step": 75500},
+ {"epoch": 1.7558045512302183, "grad_norm": 46.016876220703125, "learning_rate": 6.829910069405603e-06, "loss": 0.3687, "step": 76000},
+ {
+ "epoch": 1.767355896961996,
+ "grad_norm": 10.431674003601074,
1091
+ "learning_rate": 6.805591358102752e-06,
1092
+ "loss": 0.3448,
1093
+ "step": 76500
1094
+ },
1095
+ {
1096
+ "epoch": 1.7789072426937738,
1097
+ "grad_norm": 77.33363342285156,
1098
+ "learning_rate": 6.781272646799901e-06,
1099
+ "loss": 0.3688,
1100
+ "step": 77000
1101
+ },
1102
+ {
1103
+ "epoch": 1.7904585884255515,
1104
+ "grad_norm": 36.162261962890625,
1105
+ "learning_rate": 6.75695393549705e-06,
1106
+ "loss": 0.3481,
1107
+ "step": 77500
1108
+ },
1109
+ {
1110
+ "epoch": 1.8020099341573292,
1111
+ "grad_norm": 34.184844970703125,
1112
+ "learning_rate": 6.7326352241942005e-06,
1113
+ "loss": 0.3538,
1114
+ "step": 78000
1115
+ },
1116
+ {
1117
+ "epoch": 1.813561279889107,
1118
+ "grad_norm": 89.07086944580078,
1119
+ "learning_rate": 6.708316512891349e-06,
1120
+ "loss": 0.3735,
1121
+ "step": 78500
1122
+ },
1123
+ {
1124
+ "epoch": 1.8251126256208847,
1125
+ "grad_norm": 3.9982004165649414,
1126
+ "learning_rate": 6.683997801588499e-06,
1127
+ "loss": 0.3394,
1128
+ "step": 79000
1129
+ },
1130
+ {
1131
+ "epoch": 1.8366639713526625,
1132
+ "grad_norm": 81.374267578125,
1133
+ "learning_rate": 6.659679090285648e-06,
1134
+ "loss": 0.3174,
1135
+ "step": 79500
1136
+ },
1137
+ {
1138
+ "epoch": 1.8482153170844402,
1139
+ "grad_norm": 0.20297826826572418,
1140
+ "learning_rate": 6.635360378982797e-06,
1141
+ "loss": 0.3153,
1142
+ "step": 80000
1143
+ },
1144
+ {
1145
+ "epoch": 1.859766662816218,
1146
+ "grad_norm": 59.6692008972168,
1147
+ "learning_rate": 6.611041667679947e-06,
1148
+ "loss": 0.3545,
1149
+ "step": 80500
1150
+ },
1151
+ {
1152
+ "epoch": 1.8713180085479957,
1153
+ "grad_norm": 1.2928632497787476,
1154
+ "learning_rate": 6.586722956377096e-06,
1155
+ "loss": 0.3373,
1156
+ "step": 81000
1157
+ },
1158
+ {
1159
+ "epoch": 1.8828693542797736,
1160
+ "grad_norm": 94.39313507080078,
1161
+ "learning_rate": 6.562404245074245e-06,
1162
+ "loss": 0.3222,
1163
+ "step": 81500
1164
+ },
1165
+ {
1166
+ "epoch": 1.8944207000115514,
1167
+ "grad_norm": 29.352657318115234,
1168
+ "learning_rate": 6.538085533771395e-06,
1169
+ "loss": 0.3418,
1170
+ "step": 82000
1171
+ },
1172
+ {
1173
+ "epoch": 1.9059720457433291,
1174
+ "grad_norm": 11.170536041259766,
1175
+ "learning_rate": 6.513766822468544e-06,
1176
+ "loss": 0.3423,
1177
+ "step": 82500
1178
+ },
1179
+ {
1180
+ "epoch": 1.9175233914751069,
1181
+ "grad_norm": 90.75904846191406,
1182
+ "learning_rate": 6.489448111165694e-06,
1183
+ "loss": 0.3253,
1184
+ "step": 83000
1185
+ },
1186
+ {
1187
+ "epoch": 1.9290747372068846,
1188
+ "grad_norm": 79.5523681640625,
1189
+ "learning_rate": 6.4651293998628426e-06,
1190
+ "loss": 0.3423,
1191
+ "step": 83500
1192
+ },
1193
+ {
1194
+ "epoch": 1.9406260829386623,
1195
+ "grad_norm": 2.362600564956665,
1196
+ "learning_rate": 6.440810688559992e-06,
1197
+ "loss": 0.3233,
1198
+ "step": 84000
1199
+ },
1200
+ {
1201
+ "epoch": 1.95217742867044,
1202
+ "grad_norm": 106.5050277709961,
1203
+ "learning_rate": 6.416491977257142e-06,
1204
+ "loss": 0.3291,
1205
+ "step": 84500
1206
+ },
1207
+ {
1208
+ "epoch": 1.9637287744022178,
1209
+ "grad_norm": 44.49625015258789,
1210
+ "learning_rate": 6.392173265954291e-06,
1211
+ "loss": 0.3462,
1212
+ "step": 85000
1213
+ },
1214
+ {
1215
+ "epoch": 1.9752801201339956,
1216
+ "grad_norm": 65.74842071533203,
1217
+ "learning_rate": 6.36785455465144e-06,
1218
+ "loss": 0.3356,
1219
+ "step": 85500
1220
+ },
1221
+ {
1222
+ "epoch": 1.9868314658657735,
1223
+ "grad_norm": 25.529869079589844,
1224
+ "learning_rate": 6.34353584334859e-06,
1225
+ "loss": 0.3418,
1226
+ "step": 86000
1227
+ },
1228
+ {
1229
+ "epoch": 1.9983828115975513,
1230
+ "grad_norm": 0.7819985151290894,
1231
+ "learning_rate": 6.319217132045739e-06,
1232
+ "loss": 0.3299,
1233
+ "step": 86500
1234
+ },
1235
+ {
1236
+ "epoch": 2.0,
1237
+ "eval_accuracy": 0.8750144391821647,
1238
+ "eval_f1": 0.883420247381804,
1239
+ "eval_loss": 0.5118392705917358,
1240
+ "eval_matthews_correlation": 0.7549403916374646,
1241
+ "eval_precision": 0.8371656115989381,
1242
+ "eval_recall": 0.9350850782354819,
1243
+ "eval_runtime": 543.2127,
1244
+ "eval_samples_per_second": 79.683,
1245
+ "eval_steps_per_second": 4.981,
1246
+ "step": 86570
1247
+ },
1248
+ {
1249
+ "epoch": 2.009934157329329,
1250
+ "grad_norm": 0.09307216852903366,
1251
+ "learning_rate": 6.294898420742889e-06,
1252
+ "loss": 0.2056,
1253
+ "step": 87000
1254
+ },
1255
+ {
1256
+ "epoch": 2.0214855030611067,
1257
+ "grad_norm": 0.1265561431646347,
1258
+ "learning_rate": 6.2705797094400376e-06,
1259
+ "loss": 0.201,
1260
+ "step": 87500
1261
+ },
1262
+ {
1263
+ "epoch": 2.0330368487928845,
1264
+ "grad_norm": 0.5366824865341187,
1265
+ "learning_rate": 6.246260998137187e-06,
1266
+ "loss": 0.1897,
1267
+ "step": 88000
1268
+ },
1269
+ {
1270
+ "epoch": 2.0445881945246622,
1271
+ "grad_norm": 37.82719421386719,
1272
+ "learning_rate": 6.221942286834337e-06,
1273
+ "loss": 0.2095,
1274
+ "step": 88500
1275
+ },
1276
+ {
1277
+ "epoch": 2.05613954025644,
1278
+ "grad_norm": 0.5017096996307373,
1279
+ "learning_rate": 6.197623575531486e-06,
1280
+ "loss": 0.1988,
1281
+ "step": 89000
1282
+ },
1283
+ {
1284
+ "epoch": 2.0676908859882177,
1285
+ "grad_norm": 0.10803969949483871,
1286
+ "learning_rate": 6.173304864228635e-06,
1287
+ "loss": 0.1991,
1288
+ "step": 89500
1289
+ },
1290
+ {
1291
+ "epoch": 2.0792422317199954,
1292
+ "grad_norm": 0.3513072431087494,
1293
+ "learning_rate": 6.148986152925785e-06,
1294
+ "loss": 0.2018,
1295
+ "step": 90000
1296
+ },
1297
+ {
1298
+ "epoch": 2.090793577451773,
1299
+ "grad_norm": 1.3080644607543945,
1300
+ "learning_rate": 6.124667441622933e-06,
1301
+ "loss": 0.2095,
1302
+ "step": 90500
1303
+ },
1304
+ {
1305
+ "epoch": 2.102344923183551,
1306
+ "grad_norm": 5.2385053634643555,
1307
+ "learning_rate": 6.100348730320084e-06,
1308
+ "loss": 0.1974,
1309
+ "step": 91000
1310
+ },
1311
+ {
1312
+ "epoch": 2.1138962689153287,
1313
+ "grad_norm": 10.532730102539062,
1314
+ "learning_rate": 6.0760300190172326e-06,
1315
+ "loss": 0.2019,
1316
+ "step": 91500
1317
+ },
1318
+ {
1319
+ "epoch": 2.1254476146471064,
1320
+ "grad_norm": 0.104009710252285,
1321
+ "learning_rate": 6.051711307714382e-06,
1322
+ "loss": 0.2091,
1323
+ "step": 92000
1324
+ },
1325
+ {
1326
+ "epoch": 2.136998960378884,
1327
+ "grad_norm": 0.038262490183115005,
1328
+ "learning_rate": 6.027392596411531e-06,
1329
+ "loss": 0.2422,
1330
+ "step": 92500
1331
+ },
1332
+ {
1333
+ "epoch": 2.148550306110662,
1334
+ "grad_norm": 0.30325543880462646,
1335
+ "learning_rate": 6.003073885108681e-06,
1336
+ "loss": 0.1971,
1337
+ "step": 93000
1338
+ },
1339
+ {
1340
+ "epoch": 2.1601016518424396,
1341
+ "grad_norm": 0.2501794099807739,
1342
+ "learning_rate": 5.97875517380583e-06,
1343
+ "loss": 0.2123,
1344
+ "step": 93500
1345
+ },
1346
+ {
1347
+ "epoch": 2.1716529975742174,
1348
+ "grad_norm": 0.9724676609039307,
1349
+ "learning_rate": 5.95443646250298e-06,
1350
+ "loss": 0.2222,
1351
+ "step": 94000
1352
+ },
1353
+ {
1354
+ "epoch": 2.183204343305995,
1355
+ "grad_norm": 23.507680892944336,
1356
+ "learning_rate": 5.930117751200128e-06,
1357
+ "loss": 0.199,
1358
+ "step": 94500
1359
+ },
1360
+ {
1361
+ "epoch": 2.194755689037773,
1362
+ "grad_norm": 23.95125389099121,
1363
+ "learning_rate": 5.905799039897279e-06,
1364
+ "loss": 0.2367,
1365
+ "step": 95000
1366
+ },
1367
+ {
1368
+ "epoch": 2.2063070347695506,
1369
+ "grad_norm": 0.026822537183761597,
1370
+ "learning_rate": 5.8814803285944275e-06,
1371
+ "loss": 0.1918,
1372
+ "step": 95500
1373
+ },
1374
+ {
1375
+ "epoch": 2.2178583805013283,
1376
+ "grad_norm": 0.11111404001712799,
1377
+ "learning_rate": 5.857161617291577e-06,
1378
+ "loss": 0.2047,
1379
+ "step": 96000
1380
+ },
1381
+ {
1382
+ "epoch": 2.229409726233106,
1383
+ "grad_norm": 0.1919553279876709,
1384
+ "learning_rate": 5.832842905988726e-06,
1385
+ "loss": 0.225,
1386
+ "step": 96500
1387
+ },
1388
+ {
1389
+ "epoch": 2.240961071964884,
1390
+ "grad_norm": 0.44445013999938965,
1391
+ "learning_rate": 5.808524194685876e-06,
1392
+ "loss": 0.2373,
1393
+ "step": 97000
1394
+ },
1395
+ {
1396
+ "epoch": 2.2525124176966616,
1397
+ "grad_norm": 0.0314050130546093,
1398
+ "learning_rate": 5.784205483383025e-06,
1399
+ "loss": 0.2313,
1400
+ "step": 97500
1401
+ },
1402
+ {
1403
+ "epoch": 2.2640637634284393,
1404
+ "grad_norm": 0.3491435945034027,
1405
+ "learning_rate": 5.759886772080175e-06,
1406
+ "loss": 0.2149,
1407
+ "step": 98000
1408
+ },
1409
+ {
1410
+ "epoch": 2.275615109160217,
1411
+ "grad_norm": 0.41699114441871643,
1412
+ "learning_rate": 5.735568060777323e-06,
1413
+ "loss": 0.2012,
1414
+ "step": 98500
1415
+ },
1416
+ {
1417
+ "epoch": 2.2871664548919948,
1418
+ "grad_norm": 0.05766846239566803,
1419
+ "learning_rate": 5.711249349474474e-06,
1420
+ "loss": 0.2247,
1421
+ "step": 99000
1422
+ },
1423
+ {
1424
+ "epoch": 2.2987178006237725,
1425
+ "grad_norm": 10.02811336517334,
1426
+ "learning_rate": 5.6869306381716225e-06,
1427
+ "loss": 0.2081,
1428
+ "step": 99500
1429
+ },
1430
+ {
1431
+ "epoch": 2.3102691463555503,
1432
+ "grad_norm": 0.1262376755475998,
1433
+ "learning_rate": 5.662611926868772e-06,
1434
+ "loss": 0.2471,
1435
+ "step": 100000
1436
+ },
1437
+ {
1438
+ "epoch": 2.321820492087328,
1439
+ "grad_norm": 7.784575939178467,
1440
+ "learning_rate": 5.638293215565921e-06,
1441
+ "loss": 0.2188,
1442
+ "step": 100500
1443
+ },
1444
+ {
1445
+ "epoch": 2.3333718378191057,
1446
+ "grad_norm": 0.4924183189868927,
1447
+ "learning_rate": 5.61397450426307e-06,
1448
+ "loss": 0.182,
1449
+ "step": 101000
1450
+ },
1451
+ {
1452
+ "epoch": 2.3449231835508835,
1453
+ "grad_norm": 0.8090869188308716,
1454
+ "learning_rate": 5.58965579296022e-06,
1455
+ "loss": 0.2307,
1456
+ "step": 101500
1457
+ },
1458
+ {
1459
+ "epoch": 2.3564745292826617,
1460
+ "grad_norm": 0.5244800448417664,
1461
+ "learning_rate": 5.56533708165737e-06,
1462
+ "loss": 0.2162,
1463
+ "step": 102000
1464
+ },
1465
+ {
1466
+ "epoch": 2.368025875014439,
1467
+ "grad_norm": 0.12117698788642883,
1468
+ "learning_rate": 5.541018370354518e-06,
1469
+ "loss": 0.2339,
1470
+ "step": 102500
1471
+ },
1472
+ {
1473
+ "epoch": 2.379577220746217,
1474
+ "grad_norm": 139.68356323242188,
1475
+ "learning_rate": 5.516699659051667e-06,
1476
+ "loss": 0.2306,
1477
+ "step": 103000
1478
+ },
1479
+ {
1480
+ "epoch": 2.391128566477995,
1481
+ "grad_norm": 0.34436774253845215,
1482
+ "learning_rate": 5.4923809477488175e-06,
1483
+ "loss": 0.2289,
1484
+ "step": 103500
1485
+ },
1486
+ {
1487
+ "epoch": 2.4026799122097726,
1488
+ "grad_norm": 0.06684353947639465,
1489
+ "learning_rate": 5.468062236445967e-06,
1490
+ "loss": 0.2388,
1491
+ "step": 104000
1492
+ },
1493
+ {
1494
+ "epoch": 2.4142312579415504,
1495
+ "grad_norm": 38.31502914428711,
1496
+ "learning_rate": 5.443743525143116e-06,
1497
+ "loss": 0.2185,
1498
+ "step": 104500
1499
+ },
1500
+ {
1501
+ "epoch": 2.425782603673328,
1502
+ "grad_norm": 0.47605860233306885,
1503
+ "learning_rate": 5.419424813840265e-06,
1504
+ "loss": 0.2536,
1505
+ "step": 105000
1506
+ },
1507
+ {
1508
+ "epoch": 2.437333949405106,
1509
+ "grad_norm": 2.486711025238037,
1510
+ "learning_rate": 5.395106102537415e-06,
1511
+ "loss": 0.224,
1512
+ "step": 105500
1513
+ },
1514
+ {
1515
+ "epoch": 2.4488852951368836,
1516
+ "grad_norm": 0.19100871682167053,
1517
+ "learning_rate": 5.370787391234565e-06,
1518
+ "loss": 0.2429,
1519
+ "step": 106000
1520
+ },
1521
+ {
1522
+ "epoch": 2.4604366408686613,
1523
+ "grad_norm": 0.6943984627723694,
1524
+ "learning_rate": 5.346468679931713e-06,
1525
+ "loss": 0.2051,
1526
+ "step": 106500
1527
+ },
1528
+ {
1529
+ "epoch": 2.471987986600439,
1530
+ "grad_norm": 100.54878234863281,
1531
+ "learning_rate": 5.322149968628862e-06,
1532
+ "loss": 0.2177,
1533
+ "step": 107000
1534
+ },
1535
+ {
1536
+ "epoch": 2.483539332332217,
1537
+ "grad_norm": 1.6368072032928467,
1538
+ "learning_rate": 5.2978312573260125e-06,
1539
+ "loss": 0.2182,
1540
+ "step": 107500
1541
+ },
1542
+ {
1543
+ "epoch": 2.4950906780639945,
1544
+ "grad_norm": 0.33522534370422363,
1545
+ "learning_rate": 5.273512546023162e-06,
1546
+ "loss": 0.2011,
1547
+ "step": 108000
1548
+ },
1549
+ {
1550
+ "epoch": 2.5066420237957723,
1551
+ "grad_norm": 1.7450274229049683,
1552
+ "learning_rate": 5.249193834720311e-06,
1553
+ "loss": 0.2316,
1554
+ "step": 108500
1555
+ },
1556
+ {
1557
+ "epoch": 2.51819336952755,
1558
+ "grad_norm": 0.29246997833251953,
1559
+ "learning_rate": 5.2248751234174596e-06,
1560
+ "loss": 0.2311,
1561
+ "step": 109000
1562
+ },
1563
+ {
1564
+ "epoch": 2.5297447152593278,
1565
+ "grad_norm": 0.0980878472328186,
1566
+ "learning_rate": 5.20055641211461e-06,
1567
+ "loss": 0.2224,
1568
+ "step": 109500
1569
+ },
1570
+ {
1571
+ "epoch": 2.5412960609911055,
1572
+ "grad_norm": 19.347719192504883,
1573
+ "learning_rate": 5.17623770081176e-06,
1574
+ "loss": 0.2156,
1575
+ "step": 110000
1576
+ },
1577
+ {
1578
+ "epoch": 2.5528474067228832,
1579
+ "grad_norm": 0.4555191099643707,
1580
+ "learning_rate": 5.151918989508908e-06,
1581
+ "loss": 0.2192,
1582
+ "step": 110500
1583
+ },
1584
+ {
1585
+ "epoch": 2.564398752454661,
1586
+ "grad_norm": 8.327524185180664,
1587
+ "learning_rate": 5.127600278206057e-06,
1588
+ "loss": 0.2331,
1589
+ "step": 111000
1590
+ },
1591
+ {
1592
+ "epoch": 2.5759500981864387,
1593
+ "grad_norm": 0.23247091472148895,
1594
+ "learning_rate": 5.103281566903207e-06,
1595
+ "loss": 0.2068,
1596
+ "step": 111500
1597
+ },
1598
+ {
1599
+ "epoch": 2.5875014439182165,
1600
+ "grad_norm": 0.5253803730010986,
1601
+ "learning_rate": 5.078962855600357e-06,
1602
+ "loss": 0.2199,
1603
+ "step": 112000
1604
+ },
1605
+ {
1606
+ "epoch": 2.599052789649994,
1607
+ "grad_norm": 0.16480697691440582,
1608
+ "learning_rate": 5.054644144297506e-06,
1609
+ "loss": 0.2303,
1610
+ "step": 112500
1611
+ },
1612
+ {
1613
+ "epoch": 2.610604135381772,
1614
+ "grad_norm": 13.176816940307617,
1615
+ "learning_rate": 5.0303254329946546e-06,
1616
+ "loss": 0.2463,
1617
+ "step": 113000
1618
+ },
1619
+ {
1620
+ "epoch": 2.6221554811135497,
1621
+ "grad_norm": 0.30696120858192444,
1622
+ "learning_rate": 5.006006721691804e-06,
1623
+ "loss": 0.229,
1624
+ "step": 113500
1625
+ },
1626
+ {
1627
+ "epoch": 2.6337068268453274,
1628
+ "grad_norm": 10.747370719909668,
1629
+ "learning_rate": 4.981688010388954e-06,
1630
+ "loss": 0.239,
1631
+ "step": 114000
1632
+ },
1633
+ {
1634
+ "epoch": 2.645258172577105,
1635
+ "grad_norm": 10.800712585449219,
1636
+ "learning_rate": 4.957369299086103e-06,
1637
+ "loss": 0.2464,
1638
+ "step": 114500
1639
+ },
1640
+ {
1641
+ "epoch": 2.656809518308883,
1642
+ "grad_norm": 137.77926635742188,
1643
+ "learning_rate": 4.933050587783252e-06,
1644
+ "loss": 0.2082,
1645
+ "step": 115000
1646
+ },
1647
+ {
1648
+ "epoch": 2.6683608640406606,
1649
+ "grad_norm": 47.672916412353516,
1650
+ "learning_rate": 4.908731876480402e-06,
1651
+ "loss": 0.2169,
1652
+ "step": 115500
1653
+ },
1654
+ {
1655
+ "epoch": 2.6799122097724384,
1656
+ "grad_norm": 13.38288402557373,
1657
+ "learning_rate": 4.884413165177551e-06,
1658
+ "loss": 0.2316,
1659
+ "step": 116000
1660
+ },
1661
+ {
1662
+ "epoch": 2.691463555504216,
1663
+ "grad_norm": 1.015547275543213,
1664
+ "learning_rate": 4.860094453874701e-06,
1665
+ "loss": 0.2129,
1666
+ "step": 116500
1667
+ },
1668
+ {
1669
+ "epoch": 2.703014901235994,
1670
+ "grad_norm": 0.6633150577545166,
1671
+ "learning_rate": 4.8357757425718495e-06,
1672
+ "loss": 0.2346,
1673
+ "step": 117000
1674
+ },
1675
+ {
1676
+ "epoch": 2.7145662469677716,
1677
+ "grad_norm": 0.4231832027435303,
1678
+ "learning_rate": 4.811457031268999e-06,
1679
+ "loss": 0.2465,
1680
+ "step": 117500
1681
+ },
1682
+ {
1683
+ "epoch": 2.7261175926995493,
1684
+ "grad_norm": 0.1299211084842682,
1685
+ "learning_rate": 4.787138319966149e-06,
1686
+ "loss": 0.2395,
1687
+ "step": 118000
1688
+ },
1689
+ {
1690
+ "epoch": 2.7376689384313275,
1691
+ "grad_norm": 0.05142102763056755,
1692
+ "learning_rate": 4.762819608663298e-06,
1693
+ "loss": 0.2327,
1694
+ "step": 118500
1695
+ },
1696
+ {
1697
+ "epoch": 2.749220284163105,
1698
+ "grad_norm": 0.11573930829763412,
1699
+ "learning_rate": 4.738500897360447e-06,
1700
+ "loss": 0.2035,
1701
+ "step": 119000
1702
+ },
1703
+ {
1704
+ "epoch": 2.760771629894883,
1705
+ "grad_norm": 0.5103738307952881,
1706
+ "learning_rate": 4.714182186057597e-06,
1707
+ "loss": 0.2195,
1708
+ "step": 119500
1709
+ },
1710
+ {
1711
+ "epoch": 2.7723229756266603,
1712
+ "grad_norm": 0.45950281620025635,
1713
+ "learning_rate": 4.689863474754746e-06,
1714
+ "loss": 0.2523,
1715
+ "step": 120000
1716
+ },
1717
+ {
1718
+ "epoch": 2.7838743213584385,
1719
+ "grad_norm": 0.3899378180503845,
1720
+ "learning_rate": 4.665544763451896e-06,
1721
+ "loss": 0.2071,
1722
+ "step": 120500
1723
+ },
1724
+ {
1725
+ "epoch": 2.795425667090216,
1726
+ "grad_norm": 6.975862503051758,
1727
+ "learning_rate": 4.6412260521490445e-06,
1728
+ "loss": 0.2541,
1729
+ "step": 121000
1730
+ },
1731
+ {
1732
+ "epoch": 2.806977012821994,
1733
+ "grad_norm": 34.19971466064453,
1734
+ "learning_rate": 4.616907340846194e-06,
1735
+ "loss": 0.2136,
1736
+ "step": 121500
1737
+ },
1738
+ {
1739
+ "epoch": 2.8185283585537713,
1740
+ "grad_norm": 0.10601510852575302,
1741
+ "learning_rate": 4.592588629543344e-06,
1742
+ "loss": 0.2143,
1743
+ "step": 122000
1744
+ },
1745
+ {
1746
+ "epoch": 2.8300797042855494,
1747
+ "grad_norm": 7.919482231140137,
1748
+ "learning_rate": 4.5682699182404924e-06,
1749
+ "loss": 0.2346,
1750
+ "step": 122500
1751
+ },
1752
+ {
1753
+ "epoch": 2.8416310500173267,
1754
+ "grad_norm": 0.47603341937065125,
1755
+ "learning_rate": 4.543951206937642e-06,
1756
+ "loss": 0.2185,
1757
+ "step": 123000
1758
+ },
1759
+ {
1760
+ "epoch": 2.853182395749105,
1761
+ "grad_norm": 26.099021911621094,
1762
+ "learning_rate": 4.519632495634792e-06,
1763
+ "loss": 0.2307,
1764
+ "step": 123500
1765
+ },
1766
+ {
1767
+ "epoch": 2.8647337414808827,
1768
+ "grad_norm": 0.5455193519592285,
1769
+ "learning_rate": 4.495313784331941e-06,
1770
+ "loss": 0.2057,
1771
+ "step": 124000
1772
+ },
1773
+ {
1774
+ "epoch": 2.8762850872126604,
1775
+ "grad_norm": 2.6065433025360107,
1776
+ "learning_rate": 4.47099507302909e-06,
1777
+ "loss": 0.2273,
1778
+ "step": 124500
1779
+ },
1780
+ {
1781
+ "epoch": 2.887836432944438,
1782
+ "grad_norm": 0.44194814562797546,
1783
+ "learning_rate": 4.4466763617262395e-06,
1784
+ "loss": 0.2237,
1785
+ "step": 125000
1786
+ },
1787
+ {
1788
+ "epoch": 2.899387778676216,
1789
+ "grad_norm": 2.659928321838379,
1790
+ "learning_rate": 4.422357650423389e-06,
1791
+ "loss": 0.2162,
1792
+ "step": 125500
1793
+ },
1794
+ {
1795
+ "epoch": 2.9109391244079936,
1796
+ "grad_norm": 1.6970124244689941,
1797
+ "learning_rate": 4.398038939120539e-06,
1798
+ "loss": 0.2316,
1799
+ "step": 126000
1800
+ },
1801
+ {
1802
+ "epoch": 2.9224904701397714,
1803
+ "grad_norm": 103.99176025390625,
1804
+ "learning_rate": 4.3737202278176874e-06,
1805
+ "loss": 0.2209,
1806
+ "step": 126500
1807
+ },
1808
+ {
1809
+ "epoch": 2.934041815871549,
1810
+ "grad_norm": 0.775031328201294,
1811
+ "learning_rate": 4.349401516514837e-06,
1812
+ "loss": 0.2073,
1813
+ "step": 127000
1814
+ },
1815
+ {
1816
+ "epoch": 2.945593161603327,
1817
+ "grad_norm": 0.6462440490722656,
1818
+ "learning_rate": 4.325082805211987e-06,
1819
+ "loss": 0.2185,
1820
+ "step": 127500
1821
+ },
1822
+ {
1823
+ "epoch": 2.9571445073351046,
1824
+ "grad_norm": 2.127567768096924,
1825
+ "learning_rate": 4.300764093909136e-06,
1826
+ "loss": 0.2188,
1827
+ "step": 128000
1828
+ },
1829
+ {
1830
+ "epoch": 2.9686958530668823,
1831
+ "grad_norm": 0.3134165406227112,
1832
+ "learning_rate": 4.276445382606285e-06,
1833
+ "loss": 0.2495,
1834
+ "step": 128500
1835
+ },
1836
+ {
1837
+ "epoch": 2.98024719879866,
1838
+ "grad_norm": 0.4288942515850067,
1839
+ "learning_rate": 4.2521266713034345e-06,
1840
+ "loss": 0.2171,
1841
+ "step": 129000
1842
+ },
1843
+ {
1844
+ "epoch": 2.991798544530438,
1845
+ "grad_norm": 51.89253234863281,
1846
+ "learning_rate": 4.227807960000584e-06,
1847
+ "loss": 0.1999,
1848
+ "step": 129500
1849
+ },
1850
+ {
1851
+ "epoch": 3.0,
1852
+ "eval_accuracy": 0.887628508721266,
1853
+ "eval_f1": 0.8962368802798873,
1854
+ "eval_loss": 0.683422863483429,
1855
+ "eval_matthews_correlation": 0.7825720644338475,
1856
+ "eval_precision": 0.8417551592867161,
1857
+ "eval_recall": 0.9582592035034898,
1858
+ "eval_runtime": 541.3245,
1859
+ "eval_samples_per_second": 79.961,
1860
+ "eval_steps_per_second": 4.999,
1861
+ "step": 129855
1862
+ },
1863
+ {
1864
+ "epoch": 3.0033498902622155,
1865
+ "grad_norm": 8.645707130432129,
1866
+ "learning_rate": 4.203489248697734e-06,
1867
+ "loss": 0.2205,
1868
+ "step": 130000
1869
+ },
1870
+ {
1871
+ "epoch": 3.0149012359939933,
1872
+ "grad_norm": 0.025636987760663033,
1873
+ "learning_rate": 4.179170537394882e-06,
1874
+ "loss": 0.1589,
1875
+ "step": 130500
1876
+ },
1877
+ {
1878
+ "epoch": 3.026452581725771,
1879
+ "grad_norm": 9.634745597839355,
1880
+ "learning_rate": 4.154851826092032e-06,
1881
+ "loss": 0.1591,
1882
+ "step": 131000
1883
+ },
1884
+ {
1885
+ "epoch": 3.0380039274575488,
1886
+ "grad_norm": 0.9399522542953491,
1887
+ "learning_rate": 4.130533114789182e-06,
1888
+ "loss": 0.1473,
1889
+ "step": 131500
1890
+ },
1891
+ {
1892
+ "epoch": 3.0495552731893265,
1893
+ "grad_norm": 0.17956668138504028,
1894
+ "learning_rate": 4.106214403486331e-06,
1895
+ "loss": 0.1727,
1896
+ "step": 132000
1897
+ },
1898
+ {
1899
+ "epoch": 3.0611066189211042,
1900
+ "grad_norm": 36.57072067260742,
1901
+ "learning_rate": 4.08189569218348e-06,
1902
+ "loss": 0.1547,
1903
+ "step": 132500
1904
+ },
1905
+ {
1906
+ "epoch": 3.072657964652882,
1907
+ "grad_norm": 0.30588293075561523,
1908
+ "learning_rate": 4.0575769808806295e-06,
1909
+ "loss": 0.1727,
1910
+ "step": 133000
1911
+ },
1912
+ {
1913
+ "epoch": 3.0842093103846597,
1914
+ "grad_norm": 2.21580171585083,
1915
+ "learning_rate": 4.033258269577779e-06,
1916
+ "loss": 0.1519,
1917
+ "step": 133500
1918
+ },
1919
+ {
1920
+ "epoch": 3.0957606561164375,
1921
+ "grad_norm": 35.42125701904297,
1922
+ "learning_rate": 4.008939558274928e-06,
1923
+ "loss": 0.162,
1924
+ "step": 134000
1925
+ },
1926
+ {
1927
+ "epoch": 3.107312001848215,
1928
+ "grad_norm": 19.89476776123047,
1929
+ "learning_rate": 3.984620846972077e-06,
1930
+ "loss": 0.1489,
1931
+ "step": 134500
1932
+ },
1933
+ {
1934
+ "epoch": 3.118863347579993,
1935
+ "grad_norm": 0.5846690535545349,
1936
+ "learning_rate": 3.960302135669227e-06,
1937
+ "loss": 0.1838,
1938
+ "step": 135000
1939
+ },
1940
+ {
1941
+ "epoch": 3.1304146933117707,
1942
+ "grad_norm": 33.746910095214844,
1943
+ "learning_rate": 3.935983424366377e-06,
1944
+ "loss": 0.1657,
1945
+ "step": 135500
1946
+ },
1947
+ {
1948
+ "epoch": 3.1419660390435484,
1949
+ "grad_norm": 0.1903204321861267,
1950
+ "learning_rate": 3.911664713063525e-06,
1951
+ "loss": 0.1431,
1952
+ "step": 136000
1953
+ },
1954
+ {
1955
+ "epoch": 3.153517384775326,
1956
+ "grad_norm": 1.768623948097229,
1957
+ "learning_rate": 3.887346001760675e-06,
1958
+ "loss": 0.157,
1959
+ "step": 136500
1960
+ },
1961
+ {
1962
+ "epoch": 3.165068730507104,
1963
+ "grad_norm": 9.588990211486816,
1964
+ "learning_rate": 3.8630272904578245e-06,
1965
+ "loss": 0.1508,
1966
+ "step": 137000
1967
+ },
1968
+ {
1969
+ "epoch": 3.1766200762388817,
1970
+ "grad_norm": 0.5915334820747375,
1971
+ "learning_rate": 3.838708579154974e-06,
1972
+ "loss": 0.1769,
1973
+ "step": 137500
1974
+ },
1975
+ {
1976
+ "epoch": 3.1881714219706594,
1977
+ "grad_norm": 8.681124687194824,
1978
+ "learning_rate": 3.814389867852123e-06,
1979
+ "loss": 0.1802,
1980
+ "step": 138000
1981
+ },
1982
+ {
1983
+ "epoch": 3.199722767702437,
1984
+ "grad_norm": 9.283745765686035,
1985
+ "learning_rate": 3.7900711565492724e-06,
1986
+ "loss": 0.1584,
1987
+ "step": 138500
1988
+ },
1989
+ {
1990
+ "epoch": 3.2112741134342153,
1991
+ "grad_norm": 0.6678259968757629,
1992
+ "learning_rate": 3.7657524452464216e-06,
1993
+ "loss": 0.1554,
1994
+ "step": 139000
1995
+ },
1996
+ {
1997
+ "epoch": 3.2228254591659926,
1998
+ "grad_norm": 0.31929489970207214,
1999
+ "learning_rate": 3.741433733943571e-06,
2000
+ "loss": 0.1702,
2001
+ "step": 139500
2002
+ },
2003
+ {
2004
+ "epoch": 3.234376804897771,
2005
+ "grad_norm": 2.6308133602142334,
2006
+ "learning_rate": 3.7171150226407203e-06,
2007
+ "loss": 0.1597,
2008
+ "step": 140000
2009
+ },
2010
+ {
2011
+ "epoch": 3.2459281506295485,
2012
+ "grad_norm": 0.37557047605514526,
2013
+ "learning_rate": 3.69279631133787e-06,
2014
+ "loss": 0.1542,
2015
+ "step": 140500
2016
+ },
2017
+ {
2018
+ "epoch": 3.2574794963613263,
2019
+ "grad_norm": 0.47243937849998474,
2020
+ "learning_rate": 3.668477600035019e-06,
2021
+ "loss": 0.1595,
2022
+ "step": 141000
2023
+ },
2024
+ {
2025
+ "epoch": 3.269030842093104,
2026
+ "grad_norm": 0.026057422161102295,
2027
+ "learning_rate": 3.6441588887321686e-06,
2028
+ "loss": 0.1917,
2029
+ "step": 141500
2030
+ },
2031
+ {
2032
+ "epoch": 3.2805821878248818,
2033
+ "grad_norm": 0.18791376054286957,
2034
+ "learning_rate": 3.619840177429318e-06,
2035
+ "loss": 0.1325,
2036
+ "step": 142000
2037
+ },
2038
+ {
2039
+ "epoch": 3.2921335335566595,
2040
+ "grad_norm": 0.5374308824539185,
2041
+ "learning_rate": 3.5955214661264674e-06,
2042
+ "loss": 0.1703,
2043
+ "step": 142500
2044
+ },
2045
+ {
2046
+ "epoch": 3.3036848792884372,
2047
+ "grad_norm": 0.0012748383451253176,
2048
+ "learning_rate": 3.5712027548236165e-06,
2049
+ "loss": 0.1509,
2050
+ "step": 143000
2051
+ },
2052
+ {
2053
+ "epoch": 3.315236225020215,
2054
+ "grad_norm": 0.19805267453193665,
2055
+ "learning_rate": 3.5468840435207657e-06,
2056
+ "loss": 0.1437,
2057
+ "step": 143500
2058
+ },
2059
+ {
2060
+ "epoch": 3.3267875707519927,
2061
+ "grad_norm": 15.450338363647461,
2062
+ "learning_rate": 3.5225653322179153e-06,
2063
+ "loss": 0.1741,
2064
+ "step": 144000
2065
+ },
2066
+ {
2067
+ "epoch": 3.3383389164837705,
2068
+ "grad_norm": 0.3308798670768738,
2069
+ "learning_rate": 3.4982466209150645e-06,
2070
+ "loss": 0.1392,
2071
+ "step": 144500
2072
+ },
2073
+ {
2074
+ "epoch": 3.349890262215548,
2075
+ "grad_norm": 21.04158592224121,
2076
+ "learning_rate": 3.473927909612214e-06,
2077
+ "loss": 0.1704,
2078
+ "step": 145000
2079
+ },
2080
+ {
2081
+ "epoch": 3.361441607947326,
2082
+ "grad_norm": 8.919427871704102,
2083
+ "learning_rate": 3.449609198309363e-06,
2084
+ "loss": 0.158,
2085
+ "step": 145500
2086
+ },
2087
+ {
2088
+ "epoch": 3.3729929536791037,
2089
+ "grad_norm": 0.04234965518116951,
2090
+ "learning_rate": 3.425290487006513e-06,
2091
+ "loss": 0.1676,
2092
+ "step": 146000
2093
+ },
2094
+ {
2095
+ "epoch": 3.3845442994108814,
2096
+ "grad_norm": 0.7735577821731567,
2097
+ "learning_rate": 3.400971775703662e-06,
2098
+ "loss": 0.1717,
2099
+ "step": 146500
2100
+ },
2101
+ {
2102
+ "epoch": 3.396095645142659,
2103
+ "grad_norm": 0.0031218251679092646,
2104
+ "learning_rate": 3.3766530644008115e-06,
2105
+ "loss": 0.1852,
2106
+ "step": 147000
2107
+ },
2108
+ {
2109
+ "epoch": 3.407646990874437,
2110
+ "grad_norm": 0.2701342701911926,
2111
+ "learning_rate": 3.3523343530979607e-06,
2112
+ "loss": 0.178,
2113
+ "step": 147500
2114
+ },
2115
+ {
2116
+ "epoch": 3.4191983366062146,
2117
+ "grad_norm": 14.016953468322754,
2118
+ "learning_rate": 3.3280156417951103e-06,
2119
+ "loss": 0.1832,
2120
+ "step": 148000
2121
+ },
2122
+ {
2123
+ "epoch": 3.4307496823379924,
2124
+ "grad_norm": 0.002833212958648801,
2125
+ "learning_rate": 3.3036969304922594e-06,
2126
+ "loss": 0.1431,
2127
+ "step": 148500
2128
+ },
2129
+ {
2130
+ "epoch": 3.44230102806977,
2131
+ "grad_norm": 0.38238525390625,
2132
+ "learning_rate": 3.279378219189409e-06,
2133
+ "loss": 0.1736,
2134
+ "step": 149000
2135
+ },
2136
+ {
2137
+ "epoch": 3.453852373801548,
2138
+ "grad_norm": 0.019360702484846115,
2139
+ "learning_rate": 3.255059507886558e-06,
2140
+ "loss": 0.1589,
2141
+ "step": 149500
2142
+ },
2143
+ {
2144
+ "epoch": 3.4654037195333256,
2145
+ "grad_norm": 316.6242370605469,
2146
+ "learning_rate": 3.2307407965837078e-06,
2147
+ "loss": 0.1813,
2148
+ "step": 150000
2149
+ },
2150
+ {
2151
+ "epoch": 3.4769550652651033,
2152
+ "grad_norm": 0.6166911125183105,
2153
+ "learning_rate": 3.206422085280857e-06,
2154
+ "loss": 0.1562,
2155
+ "step": 150500
2156
+ },
2157
+ {
2158
+ "epoch": 3.488506410996881,
2159
+ "grad_norm": 0.12754186987876892,
2160
+ "learning_rate": 3.1821033739780065e-06,
2161
+ "loss": 0.1826,
2162
+ "step": 151000
2163
+ },
2164
+ {
2165
+ "epoch": 3.500057756728659,
2166
+ "grad_norm": 1.4088376760482788,
2167
+ "learning_rate": 3.1577846626751557e-06,
2168
+ "loss": 0.1751,
2169
+ "step": 151500
2170
+ },
2171
+ {
2172
+ "epoch": 3.5116091024604366,
2173
+ "grad_norm": 0.6573951840400696,
2174
+ "learning_rate": 3.1334659513723053e-06,
2175
+ "loss": 0.1612,
2176
+ "step": 152000
2177
+ },
2178
+ {
2179
+ "epoch": 3.5231604481922143,
2180
+ "grad_norm": 0.6012342572212219,
2181
+ "learning_rate": 3.1091472400694544e-06,
2182
+ "loss": 0.1734,
2183
+ "step": 152500
2184
+ },
2185
+ {
2186
+ "epoch": 3.534711793923992,
2187
+ "grad_norm": 0.6947406530380249,
2188
+ "learning_rate": 3.084828528766604e-06,
2189
+ "loss": 0.1574,
2190
+ "step": 153000
2191
+ },
2192
+ {
2193
+ "epoch": 3.54626313965577,
2194
+ "grad_norm": 0.49752572178840637,
2195
+ "learning_rate": 3.060509817463753e-06,
2196
+ "loss": 0.142,
2197
+ "step": 153500
2198
+ },
2199
+ {
2200
+ "epoch": 3.5578144853875475,
2201
+ "grad_norm": 0.6474779844284058,
2202
+ "learning_rate": 3.0361911061609028e-06,
2203
+ "loss": 0.1511,
2204
+ "step": 154000
2205
+ },
2206
+ {
2207
+ "epoch": 3.5693658311193253,
2208
+ "grad_norm": 1.3100844621658325,
2209
+ "learning_rate": 3.011872394858052e-06,
2210
+ "loss": 0.1652,
2211
+ "step": 154500
2212
+ },
2213
+ {
2214
+ "epoch": 3.580917176851103,
2215
+ "grad_norm": 0.9856226444244385,
2216
+ "learning_rate": 2.987553683555201e-06,
2217
+ "loss": 0.1884,
2218
+ "step": 155000
2219
+ },
2220
+ {
2221
+ "epoch": 3.5924685225828807,
2222
+ "grad_norm": 73.24075317382812,
2223
+ "learning_rate": 2.9632349722523507e-06,
2224
+ "loss": 0.1629,
2225
+ "step": 155500
2226
+ },
2227
+ {
2228
+ "epoch": 3.6040198683146585,
2229
+ "grad_norm": 8.039386749267578,
2230
+ "learning_rate": 2.9389162609495e-06,
2231
+ "loss": 0.1628,
2232
+ "step": 156000
2233
+ },
2234
+ {
2235
+ "epoch": 3.6155712140464367,
2236
+ "grad_norm": 0.0022829582449048758,
2237
+ "learning_rate": 2.9145975496466494e-06,
2238
+ "loss": 0.1532,
2239
+ "step": 156500
2240
+ },
2241
+ {
2242
+ "epoch": 3.627122559778214,
2243
+ "grad_norm": 0.43138188123703003,
2244
+ "learning_rate": 2.8902788383437986e-06,
2245
+ "loss": 0.1711,
2246
+ "step": 157000
2247
+ },
2248
+ {
2249
+ "epoch": 3.638673905509992,
2250
+ "grad_norm": 7.881419658660889,
2251
+ "learning_rate": 2.865960127040948e-06,
2252
+ "loss": 0.155,
2253
+ "step": 157500
2254
+ },
2255
+ {
2256
+ "epoch": 3.6502252512417694,
2257
+ "grad_norm": 117.18584442138672,
2258
+ "learning_rate": 2.8416414157380973e-06,
2259
+ "loss": 0.155,
2260
+ "step": 158000
2261
+ },
2262
+ {
2263
+ "epoch": 3.6617765969735476,
2264
+ "grad_norm": 0.0010274089872837067,
2265
+ "learning_rate": 2.817322704435247e-06,
2266
+ "loss": 0.1717,
2267
+ "step": 158500
2268
+ },
2269
+ {
2270
+ "epoch": 3.673327942705325,
2271
+ "grad_norm": 109.89828491210938,
2272
+ "learning_rate": 2.793003993132396e-06,
2273
+ "loss": 0.1608,
2274
+ "step": 159000
2275
+ },
2276
+ {
2277
+ "epoch": 3.684879288437103,
2278
+ "grad_norm": 0.7722026109695435,
2279
+ "learning_rate": 2.7686852818295457e-06,
2280
+ "loss": 0.1789,
2281
+ "step": 159500
2282
+ },
2283
+ {
2284
+ "epoch": 3.6964306341688804,
2285
+ "grad_norm": 0.0013100790092721581,
2286
+ "learning_rate": 2.744366570526695e-06,
2287
+ "loss": 0.1601,
2288
+ "step": 160000
2289
+ },
2290
+ {
2291
+ "epoch": 3.7079819799006586,
2292
+ "grad_norm": 0.25156673789024353,
2293
+ "learning_rate": 2.7200478592238444e-06,
2294
+ "loss": 0.1588,
2295
+ "step": 160500
2296
+ },
2297
+ {
2298
+ "epoch": 3.7195333256324363,
2299
+ "grad_norm": 0.18242871761322021,
2300
+ "learning_rate": 2.6957291479209936e-06,
2301
+ "loss": 0.1626,
2302
+ "step": 161000
2303
+ },
2304
+ {
2305
+ "epoch": 3.731084671364214,
2306
+ "grad_norm": 0.008753698319196701,
2307
+ "learning_rate": 2.671410436618143e-06,
2308
+ "loss": 0.1654,
2309
+ "step": 161500
2310
+ },
2311
+ {
2312
+ "epoch": 3.742636017095992,
2313
+ "grad_norm": 0.6226741671562195,
2314
+ "learning_rate": 2.6470917253152923e-06,
2315
+ "loss": 0.174,
2316
+ "step": 162000
2317
+ },
2318
+ {
2319
+ "epoch": 3.7541873628277695,
2320
+ "grad_norm": 12.760992050170898,
2321
+ "learning_rate": 2.622773014012442e-06,
2322
+ "loss": 0.1656,
2323
+ "step": 162500
2324
+ },
2325
+ {
2326
+ "epoch": 3.7657387085595473,
2327
+ "grad_norm": 16.012075424194336,
2328
+ "learning_rate": 2.598454302709591e-06,
2329
+ "loss": 0.15,
2330
+ "step": 163000
2331
+ },
2332
+ {
2333
+ "epoch": 3.777290054291325,
2334
+ "grad_norm": 7.900235652923584,
2335
+ "learning_rate": 2.5741355914067407e-06,
2336
+ "loss": 0.1705,
2337
+ "step": 163500
2338
+ },
2339
+ {
2340
+ "epoch": 3.7888414000231028,
2341
+ "grad_norm": 2.0820932388305664,
2342
+ "learning_rate": 2.54981688010389e-06,
2343
+ "loss": 0.1752,
2344
+ "step": 164000
2345
+ },
2346
+ {
2347
+ "epoch": 3.8003927457548805,
2348
+ "grad_norm": 1.189961552619934,
2349
+ "learning_rate": 2.5254981688010394e-06,
2350
+ "loss": 0.1578,
2351
+ "step": 164500
2352
+ },
2353
+ {
2354
+ "epoch": 3.8119440914866582,
2355
+ "grad_norm": 0.005685470532625914,
2356
+ "learning_rate": 2.5011794574981886e-06,
2357
+ "loss": 0.147,
2358
+ "step": 165000
2359
+ },
2360
+ {
2361
+ "epoch": 3.823495437218436,
2362
+ "grad_norm": 0.33695000410079956,
2363
+ "learning_rate": 2.4768607461953377e-06,
2364
+ "loss": 0.1835,
2365
+ "step": 165500
2366
+ },
2367
+ {
2368
+ "epoch": 3.8350467829502137,
2369
+ "grad_norm": 18.306541442871094,
2370
+ "learning_rate": 2.4525420348924873e-06,
2371
+ "loss": 0.1963,
2372
+ "step": 166000
2373
+ },
2374
+ {
2375
+ "epoch": 3.8465981286819915,
2376
+ "grad_norm": 0.6380665302276611,
2377
+ "learning_rate": 2.4282233235896365e-06,
2378
+ "loss": 0.1696,
2379
+ "step": 166500
2380
+ },
2381
+ {
2382
+ "epoch": 3.858149474413769,
2383
+ "grad_norm": 0.31257039308547974,
2384
+ "learning_rate": 2.403904612286786e-06,
2385
+ "loss": 0.166,
2386
+ "step": 167000
2387
+ },
2388
+ {
2389
+ "epoch": 3.869700820145547,
2390
+ "grad_norm": 0.09384506940841675,
2391
+ "learning_rate": 2.3795859009839352e-06,
2392
+ "loss": 0.1734,
2393
+ "step": 167500
2394
+ },
2395
+ {
2396
+ "epoch": 3.8812521658773247,
2397
+ "grad_norm": 0.12250012159347534,
2398
+ "learning_rate": 2.355267189681085e-06,
2399
+ "loss": 0.1705,
2400
+ "step": 168000
2401
+ },
2402
+ {
2403
+ "epoch": 3.8928035116091024,
2404
+ "grad_norm": 0.3609682619571686,
2405
+ "learning_rate": 2.330948478378234e-06,
2406
+ "loss": 0.1548,
2407
+ "step": 168500
2408
+ },
2409
+ {
2410
+ "epoch": 3.90435485734088,
2411
+ "grad_norm": 9.560044288635254,
2412
+ "learning_rate": 2.3066297670753836e-06,
2413
+ "loss": 0.1468,
2414
+ "step": 169000
2415
+ },
2416
+ {
2417
+ "epoch": 3.915906203072658,
2418
+ "grad_norm": 1.053947925567627,
2419
+ "learning_rate": 2.2823110557725327e-06,
2420
+ "loss": 0.1686,
2421
+ "step": 169500
2422
+ },
2423
+ {
2424
+ "epoch": 3.9274575488044356,
2425
+ "grad_norm": 0.032445378601551056,
2426
+ "learning_rate": 2.2579923444696823e-06,
2427
+ "loss": 0.1487,
2428
+ "step": 170000
2429
+ },
2430
+ {
2431
+ "epoch": 3.9390088945362134,
2432
+ "grad_norm": 8.668307304382324,
2433
+ "learning_rate": 2.2336736331668315e-06,
2434
+ "loss": 0.1791,
2435
+ "step": 170500
2436
+ },
2437
+ {
2438
+ "epoch": 3.950560240267991,
2439
+ "grad_norm": 9.52789306640625,
2440
+ "learning_rate": 2.2093549218639806e-06,
2441
+ "loss": 0.1841,
2442
+ "step": 171000
2443
+ },
2444
+ {
2445
+ "epoch": 3.962111585999769,
2446
+ "grad_norm": 0.011110074818134308,
2447
+ "learning_rate": 2.18503621056113e-06,
2448
+ "loss": 0.1625,
2449
+ "step": 171500
2450
+ },
2451
+ {
2452
+ "epoch": 3.9736629317315466,
2453
+ "grad_norm": 82.39501190185547,
2454
+ "learning_rate": 2.1607174992582794e-06,
2455
+ "loss": 0.1714,
2456
+ "step": 172000
2457
+ },
2458
+ {
2459
+ "epoch": 3.9852142774633244,
2460
+ "grad_norm": 1.0335499048233032,
2461
+ "learning_rate": 2.136398787955429e-06,
2462
+ "loss": 0.15,
2463
+ "step": 172500
2464
+ },
2465
+ {
2466
+ "epoch": 3.996765623195102,
2467
+ "grad_norm": 2.844334363937378,
2468
+ "learning_rate": 2.112080076652578e-06,
2469
+ "loss": 0.1628,
2470
+ "step": 173000
2471
+ },
2472
+ {
2473
+ "epoch": 4.0,
2474
+ "eval_accuracy": 0.8855723691810096,
2475
+ "eval_f1": 0.895216738242823,
2476
+ "eval_loss": 0.9363193511962891,
2477
+ "eval_matthews_correlation": 0.7805306223945409,
2478
+ "eval_precision": 0.8347009626005997,
2479
+ "eval_recall": 0.9651931937411614,
2480
+ "eval_runtime": 550.7934,
2481
+ "eval_samples_per_second": 78.587,
2482
+ "eval_steps_per_second": 4.913,
2483
+ "step": 173140
2484
+ }
2485
+ ],
2486
+ "logging_steps": 500,
2487
+ "max_steps": 216425,
2488
+ "num_input_tokens_seen": 0,
2489
+ "num_train_epochs": 5,
2490
+ "save_steps": 200,
2491
+ "stateful_callbacks": {
2492
+ "TrainerControl": {
2493
+ "args": {
2494
+ "should_epoch_stop": false,
2495
+ "should_evaluate": false,
2496
+ "should_log": false,
2497
+ "should_save": true,
2498
+ "should_training_stop": false
2499
+ },
2500
+ "attributes": {}
2501
+ }
2502
+ },
2503
+ "total_flos": 3.851751926497198e+17,
2504
+ "train_batch_size": 8,
2505
+ "trial_name": null,
2506
+ "trial_params": null
2507
+ }
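Each checkpoint directory carries a `trainer_state.json` like the one above, whose `log_history` mixes training entries (`loss`, `learning_rate`, `grad_norm`) with per-epoch `eval_*` entries. A minimal sketch, using only the standard library and an illustrative checkpoint path, for pulling out the eval rows:

```python
import json

# Illustrative path; any of the checkpoint-*/trainer_state.json files works.
with open("checkpoint-173140/trainer_state.json") as fh:
    state = json.load(fh)

# Eval entries are the ones carrying eval_* keys; the rest are training logs.
for entry in state["log_history"]:
    if "eval_f1" in entry:
        print(f"epoch {entry['epoch']:.0f}  step {entry['step']:>6}  "
              f"f1 {entry['eval_f1']:.4f}  mcc {entry['eval_matthews_correlation']:.4f}")
```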
checkpoint-173140/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505
3
+ size 5432
checkpoint-216425/config.json ADDED
@@ -0,0 +1,48 @@
1
+ {
2
+ "_name_or_path": "models/plant-dnamamba-BPE",
3
+ "architectures": [
4
+ "MambaForSequenceClassification"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_mamba.MambaConfig",
8
+ "AutoModel": "modeling_mamba.MambaModel",
9
+ "AutoModelForCausalLM": "modeling_mamba.MambaForCausalLM",
10
+ "AutoModelForSequenceClassification": "modeling_mamba.MambaForSequenceClassification"
11
+ },
12
+ "bos_token_id": 0,
13
+ "conv_kernel": 4,
14
+ "d_inner": 1536,
15
+ "d_model": 768,
16
+ "eos_token_id": 0,
17
+ "expand": 2,
18
+ "fused_add_norm": true,
19
+ "hidden_act": "silu",
20
+ "hidden_size": 768,
21
+ "initializer_range": 0.1,
22
+ "intermediate_size": 1536,
23
+ "layer_norm_epsilon": 1e-05,
24
+ "model_type": "mamba",
25
+ "n_layer": 24,
26
+ "num_hidden_layers": 24,
27
+ "pad_token_id": 0,
28
+ "pad_vocab_size_multiple": 8,
29
+ "problem_type": "single_label_classification",
30
+ "rescale_prenorm_residual": false,
31
+ "residual_in_fp32": true,
32
+ "rms_norm": true,
33
+ "ssm_cfg": {},
34
+ "state_size": 16,
35
+ "time_step_floor": 0.0001,
36
+ "time_step_init_scheme": "random",
37
+ "time_step_max": 0.1,
38
+ "time_step_min": 0.001,
39
+ "time_step_rank": 48,
40
+ "time_step_scale": 1.0,
41
+ "torch_dtype": "float32",
42
+ "transformers_version": "4.45.2",
43
+ "use_bias": false,
44
+ "use_cache": false,
45
+ "use_conv_bias": true,
46
+ "use_mambapy": true,
47
+ "vocab_size": 8000
48
+ }
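Because `auto_map` in this `config.json` routes the `Auto*` classes to the bundled `configuration_mamba.py` / `modeling_mamba.py`, loading the checkpoint through `transformers` requires `trust_remote_code=True`. A minimal loading sketch (the checkpoint path is illustrative):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "checkpoint-216425"  # illustrative path to this checkpoint directory

# auto_map sends AutoModelForSequenceClassification to the bundled
# modeling_mamba.MambaForSequenceClassification, so custom code must be trusted.
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, trust_remote_code=True)
model.eval()
```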
checkpoint-216425/configuration_mamba.py ADDED
@@ -0,0 +1,157 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """MAMBA configuration"""
16
+
17
+ import math
18
+
19
+ from transformers.configuration_utils import PretrainedConfig
20
+ from transformers.utils import logging
21
+
22
+
23
+ logger = logging.get_logger(__name__)
24
+
25
+
26
+ class MambaConfig(PretrainedConfig):
27
+ """
28
+ This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
29
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
30
+ defaults will yield a similar configuration to that of the MAMBA
31
+ [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.
32
+
33
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
34
+ documentation from [`PretrainedConfig`] for more information.
35
+
36
+
37
+ Args:
38
+ vocab_size (`int`, *optional*, defaults to 50280):
39
+ Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the
40
+ `inputs_ids` passed when calling [`MambaModel`].
41
+ hidden_size (`int`, *optional*, defaults to 768):
42
+ Dimensionality of the embeddings and hidden states.
43
+ state_size (`int`, *optional*, defaults to 16): shape of the state space latents.
44
+ num_hidden_layers (`int`, *optional*, defaults to 32):
45
+ Number of hidden layers in the model.
46
+ layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
47
+ The epsilon to use in the layer normalization layers.
48
+ pad_token_id (`int`, *optional*, defaults to 0):
49
+ Padding token id.
50
+ bos_token_id (`int`, *optional*, defaults to 0):
51
+ The id of the beginning of sentence token in the vocabulary.
52
+ eos_token_id (`int`, *optional*, defaults to 0):
53
+ The id of the end of sentence token in the vocabulary.
54
+ expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
55
+ conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
56
+ use_bias (`bool`, *optional*, defaults to `False`):
57
+ Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block
58
+ use_conv_bias (`bool`, *optional*, defaults to `True`):
59
+ Whether or not to use bias in the convolution layer of the mixer block.
60
+ hidden_act (`str`, *optional*, defaults to `"silu"`):
61
+ The non-linear activation function (function or string) in the decoder.
62
+ initializer_range (`float`, *optional*, defaults to 0.1):
63
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64
+ residual_in_fp32 (`bool`, *optional*, defaults to `True`):
65
+ Whether or not residuals should be in `float32`. If set to `False`, residuals will keep the same `dtype` as the rest of the model.
66
+ time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
67
+ Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`
68
+ time_step_scale (`float`, *optional*, defaults to 1.0):
69
+ Scale used to scale `dt_proj.bias`.
70
+ time_step_min (`float`, *optional*, defaults to 0.001):
71
+ Minimum `time_step` used to bound `dt_proj.bias`.
72
+ time_step_max (`float`, *optional*, defaults to 0.1):
73
+ Maximum `time_step` used to bound `dt_proj.bias`.
74
+ time_step_init_scheme (`float`, *optional*, defaults to `"random"`):
75
+ Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]`
76
+ time_step_floor (`float`, *optional*, defaults to 0.0001):
77
+ Minimum clamping value of the `dt_proj.bias` layer initialization.
78
+ rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
79
+ Whether or not to rescale `out_proj` weights when initializing.
80
+ use_cache (`bool`, *optional*, defaults to `True`):
81
+ Whether or not the cache should be used.
82
+ use_mambapy (`bool`, *optional*, defaults to `False`):
83
+ Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.
84
+
85
+
86
+ Example:
87
+
88
+ ```python
89
+ >>> from transformers import MambaConfig, MambaModel
90
+
91
+ >>> # Initializing a Mamba configuration
92
+ >>> configuration = MambaConfig()
93
+
94
+ >>> # Initializing a model (with random weights) from the configuration
95
+ >>> model = MambaModel(configuration)
96
+
97
+ >>> # Accessing the model configuration
98
+ >>> configuration = model.config
99
+ ```"""
100
+
101
+ model_type = "mamba"
102
+
103
+ def __init__(
104
+ self,
105
+ vocab_size=50280,
106
+ hidden_size=768,
107
+ state_size=16,
108
+ num_hidden_layers=32,
109
+ layer_norm_epsilon=1e-5,
110
+ pad_token_id=0,
111
+ bos_token_id=0,
112
+ eos_token_id=0,
113
+ expand=2,
114
+ conv_kernel=4,
115
+ use_bias=False,
116
+ use_conv_bias=True,
117
+ hidden_act="silu",
118
+ initializer_range=0.1,
119
+ residual_in_fp32=True,
120
+ time_step_rank="auto",
121
+ time_step_scale=1.0,
122
+ time_step_min=0.001,
123
+ time_step_max=0.1,
124
+ time_step_init_scheme="random",
125
+ time_step_floor=1e-4,
126
+ rescale_prenorm_residual=False,
127
+ use_cache=True,
128
+ use_mambapy=False,
129
+ **kwargs,
130
+ ):
131
+ self.vocab_size = vocab_size
132
+ self.hidden_size = hidden_size
133
+ self.state_size = state_size
134
+ self.num_hidden_layers = num_hidden_layers
135
+ self.layer_norm_epsilon = layer_norm_epsilon
136
+ self.conv_kernel = conv_kernel
137
+ self.expand = expand
138
+ self.intermediate_size = int(expand * self.hidden_size)
139
+ self.bos_token_id = bos_token_id
140
+ self.eos_token_id = eos_token_id
141
+ self.pad_token_id = pad_token_id
142
+ self.use_bias = use_bias
143
+ self.use_conv_bias = use_conv_bias
144
+ self.hidden_act = hidden_act
145
+ self.initializer_range = initializer_range
146
+ self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
147
+ self.time_step_scale = time_step_scale
148
+ self.time_step_min = time_step_min
149
+ self.time_step_max = time_step_max
150
+ self.time_step_init_scheme = time_step_init_scheme
151
+ self.time_step_floor = time_step_floor
152
+ self.rescale_prenorm_residual = rescale_prenorm_residual
153
+ self.residual_in_fp32 = residual_in_fp32
154
+ self.use_cache = use_cache
155
+ self.use_mambapy = use_mambapy
156
+
157
+ super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, **kwargs)
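The values in the checkpoint's `config.json` are consistent with this class's derivations: with `hidden_size=768`, `time_step_rank="auto"` resolves to `math.ceil(768 / 16) = 48`, and `intermediate_size = int(expand * hidden_size) = 1536`, matching `d_inner`. A sketch, assuming it is run from the checkpoint directory so this module is importable:

```python
import math
from configuration_mamba import MambaConfig  # the file above, importable from the checkpoint dir

# Values taken from the checkpoint's config.json.
config = MambaConfig(
    vocab_size=8000,
    hidden_size=768,
    state_size=16,
    num_hidden_layers=24,
    time_step_rank="auto",
    use_mambapy=True,
)

assert config.time_step_rank == math.ceil(768 / 16) == 48
assert config.intermediate_size == 1536  # expand (2) * hidden_size (768)
```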
checkpoint-216425/model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:35b0771fcc6ccbf5b888641232cd6fd6d1d2f34ab28a8b659bf6bfe9a96d2506
3
+ size 386689624
checkpoint-216425/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2ceb6d9ebfe2050f5f2613cb6f9e834794b0deabc2e60dae2be6126d07bbf0e8
3
+ size 773531386
checkpoint-216425/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a5e5ddd402f202401f9ff5ebd5d2c03d3a20b67cf423c5a965ccdf12d7a97296
3
+ size 14244
checkpoint-216425/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7ae586b2b25f78122188d8810f751453ad264e8f5b1e1ebd7cf75777e4c021ac
3
+ size 1064
checkpoint-216425/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|endoftext|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<|endoftext|>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<|endoftext|>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
checkpoint-216425/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
checkpoint-216425/tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": false,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<|endoftext|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<|padding|>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ }
22
+ },
23
+ "bos_token": "<|endoftext|>",
24
+ "clean_up_tokenization_spaces": true,
25
+ "eos_token": "<|endoftext|>",
26
+ "max_length": 1024,
27
+ "model_max_length": 512,
28
+ "pad_to_multiple_of": null,
29
+ "pad_token": "<|endoftext|>",
30
+ "pad_token_type_id": 0,
31
+ "padding_side": "right",
32
+ "stride": 0,
33
+ "tokenizer_class": "GPTNeoXTokenizer",
34
+ "truncation_side": "right",
35
+ "truncation_strategy": "longest_first",
36
+ "unk_token": "<|endoftext|>"
37
+ }
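Per this `tokenizer_config.json`, inputs are padded and truncated on the right against `model_max_length = 512`. A tokenization sketch (the checkpoint path and DNA string are illustrative):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint path; the tokenizer files are those shown above.
tok = AutoTokenizer.from_pretrained("checkpoint-216425")

# Toy fragment; real inputs are promoter sequences upstream of the TSS.
seq = "ATGCGTACGTTAGC" * 40
enc = tok(seq, truncation=True, max_length=512)
print(len(enc["input_ids"]))  # <= 512 after right-side truncation
```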
checkpoint-216425/trainer_state.json ADDED
@@ -0,0 +1,3122 @@
+ {
+ "best_metric": 0.9035956450129409,
+ "best_model_checkpoint": "finetune/models/plant-dnamamba-BPE-NoduleExpDup/checkpoint-216425",
+ "epoch": 5.0,
+ "eval_steps": 500,
+ "global_step": 216425,
+ "is_hyper_param_search": false,
+ "is_local_process_zero": true,
+ "is_world_process_zero": true,
+ "log_history": [
+ {
+ "epoch": 0.011551345731777752,
+ "grad_norm": 29.45172119140625,
+ "learning_rate": 4.620218074293107e-07,
+ "loss": 1.0687,
+ "step": 500
+ },
+ {
+ "epoch": 0.023102691463555505,
+ "grad_norm": 31.642974853515625,
+ "learning_rate": 9.240436148586214e-07,
+ "loss": 0.9807,
+ "step": 1000
+ },
+ {
+ "epoch": 0.034654037195333255,
+ "grad_norm": 12.195789337158203,
+ "learning_rate": 1.3860654222879322e-06,
+ "loss": 0.8373,
+ "step": 1500
+ },
+ {
+ "epoch": 0.04620538292711101,
+ "grad_norm": 13.536879539489746,
+ "learning_rate": 1.8480872297172428e-06,
+ "loss": 0.7818,
+ "step": 2000
+ },
+ {
+ "epoch": 0.05775672865888876,
+ "grad_norm": 13.85306453704834,
+ "learning_rate": 2.3101090371465534e-06,
+ "loss": 0.7234,
+ "step": 2500
+ },
+ {
+ "epoch": 0.06930807439066651,
+ "grad_norm": 11.797886848449707,
+ "learning_rate": 2.7721308445758644e-06,
+ "loss": 0.6925,
+ "step": 3000
+ },
+ {
+ "epoch": 0.08085942012244426,
+ "grad_norm": 6.770650386810303,
+ "learning_rate": 3.2341526520051746e-06,
+ "loss": 0.6751,
+ "step": 3500
+ },
+ {
+ "epoch": 0.09241076585422202,
+ "grad_norm": 10.289820671081543,
+ "learning_rate": 3.6961744594344856e-06,
+ "loss": 0.6636,
+ "step": 4000
+ },
+ {
+ "epoch": 0.10396211158599977,
+ "grad_norm": 15.198243141174316,
+ "learning_rate": 4.158196266863797e-06,
+ "loss": 0.6563,
+ "step": 4500
+ },
+ {
+ "epoch": 0.11551345731777753,
+ "grad_norm": 13.971009254455566,
+ "learning_rate": 4.620218074293107e-06,
+ "loss": 0.6481,
+ "step": 5000
+ },
+ {
+ "epoch": 0.12706480304955528,
+ "grad_norm": 8.441276550292969,
+ "learning_rate": 5.082239881722417e-06,
+ "loss": 0.6549,
+ "step": 5500
+ },
+ {
+ "epoch": 0.13861614878133302,
+ "grad_norm": 14.8948335647583,
+ "learning_rate": 5.544261689151729e-06,
+ "loss": 0.6507,
+ "step": 6000
+ },
+ {
+ "epoch": 0.1501674945131108,
+ "grad_norm": 7.725317478179932,
+ "learning_rate": 6.006283496581039e-06,
+ "loss": 0.6544,
+ "step": 6500
+ },
+ {
+ "epoch": 0.16171884024488853,
+ "grad_norm": 17.12333106994629,
+ "learning_rate": 6.468305304010349e-06,
+ "loss": 0.6476,
+ "step": 7000
+ },
+ {
+ "epoch": 0.17327018597666627,
+ "grad_norm": 7.033656120300293,
+ "learning_rate": 6.930327111439661e-06,
+ "loss": 0.6496,
+ "step": 7500
+ },
+ {
+ "epoch": 0.18482153170844404,
+ "grad_norm": 7.053399085998535,
+ "learning_rate": 7.392348918868971e-06,
+ "loss": 0.6523,
+ "step": 8000
+ },
+ {
+ "epoch": 0.19637287744022178,
+ "grad_norm": 6.655728340148926,
+ "learning_rate": 7.854370726298282e-06,
+ "loss": 0.6455,
+ "step": 8500
+ },
+ {
+ "epoch": 0.20792422317199954,
+ "grad_norm": 8.08369255065918,
+ "learning_rate": 8.316392533727593e-06,
+ "loss": 0.6415,
+ "step": 9000
+ },
+ {
+ "epoch": 0.21947556890377729,
+ "grad_norm": 3.947148084640503,
+ "learning_rate": 8.778414341156903e-06,
+ "loss": 0.6514,
+ "step": 9500
+ },
+ {
+ "epoch": 0.23102691463555505,
+ "grad_norm": 8.264023780822754,
+ "learning_rate": 9.240436148586214e-06,
+ "loss": 0.6431,
+ "step": 10000
+ },
+ {
+ "epoch": 0.2425782603673328,
+ "grad_norm": 4.74052095413208,
+ "learning_rate": 9.702457956015525e-06,
+ "loss": 0.6331,
+ "step": 10500
+ },
+ {
+ "epoch": 0.25412960609911056,
+ "grad_norm": 12.625064849853516,
+ "learning_rate": 9.991342538776186e-06,
+ "loss": 0.6449,
+ "step": 11000
+ },
+ {
+ "epoch": 0.2656809518308883,
+ "grad_norm": 17.814157485961914,
+ "learning_rate": 9.967023827473336e-06,
+ "loss": 0.6414,
+ "step": 11500
+ },
+ {
+ "epoch": 0.27723229756266604,
+ "grad_norm": 15.34868049621582,
+ "learning_rate": 9.942705116170485e-06,
+ "loss": 0.6287,
+ "step": 12000
+ },
+ {
+ "epoch": 0.2887836432944438,
+ "grad_norm": 5.785658836364746,
+ "learning_rate": 9.918386404867634e-06,
+ "loss": 0.6417,
+ "step": 12500
+ },
+ {
+ "epoch": 0.3003349890262216,
+ "grad_norm": 3.7156598567962646,
+ "learning_rate": 9.894067693564784e-06,
+ "loss": 0.6436,
+ "step": 13000
+ },
+ {
+ "epoch": 0.3118863347579993,
+ "grad_norm": 12.272802352905273,
+ "learning_rate": 9.869748982261933e-06,
+ "loss": 0.6505,
+ "step": 13500
+ },
+ {
+ "epoch": 0.32343768048977706,
+ "grad_norm": 15.592353820800781,
+ "learning_rate": 9.845430270959082e-06,
+ "loss": 0.6313,
+ "step": 14000
+ },
+ {
+ "epoch": 0.3349890262215548,
+ "grad_norm": 4.684946537017822,
+ "learning_rate": 9.821111559656232e-06,
+ "loss": 0.6457,
+ "step": 14500
+ },
+ {
+ "epoch": 0.34654037195333254,
+ "grad_norm": 6.0165300369262695,
+ "learning_rate": 9.79679284835338e-06,
+ "loss": 0.641,
+ "step": 15000
+ },
+ {
+ "epoch": 0.35809171768511033,
+ "grad_norm": 10.924590110778809,
+ "learning_rate": 9.772474137050531e-06,
+ "loss": 0.6293,
+ "step": 15500
+ },
+ {
+ "epoch": 0.3696430634168881,
+ "grad_norm": 7.296951770782471,
+ "learning_rate": 9.74815542574768e-06,
+ "loss": 0.627,
+ "step": 16000
+ },
+ {
+ "epoch": 0.3811944091486658,
+ "grad_norm": 14.750964164733887,
+ "learning_rate": 9.723836714444829e-06,
+ "loss": 0.6254,
+ "step": 16500
+ },
+ {
+ "epoch": 0.39274575488044355,
+ "grad_norm": 7.377384662628174,
+ "learning_rate": 9.699518003141977e-06,
+ "loss": 0.6287,
+ "step": 17000
+ },
+ {
+ "epoch": 0.40429710061222135,
+ "grad_norm": 6.540673732757568,
+ "learning_rate": 9.675199291839128e-06,
+ "loss": 0.6309,
+ "step": 17500
+ },
+ {
+ "epoch": 0.4158484463439991,
+ "grad_norm": 6.4445295333862305,
+ "learning_rate": 9.650880580536277e-06,
+ "loss": 0.6302,
+ "step": 18000
+ },
+ {
+ "epoch": 0.42739979207577683,
+ "grad_norm": 7.85426664352417,
+ "learning_rate": 9.626561869233427e-06,
+ "loss": 0.6201,
+ "step": 18500
+ },
+ {
+ "epoch": 0.43895113780755457,
+ "grad_norm": 5.960766792297363,
+ "learning_rate": 9.602243157930576e-06,
+ "loss": 0.626,
+ "step": 19000
+ },
+ {
+ "epoch": 0.4505024835393323,
+ "grad_norm": 9.376029968261719,
+ "learning_rate": 9.577924446627726e-06,
+ "loss": 0.6164,
+ "step": 19500
+ },
+ {
+ "epoch": 0.4620538292711101,
+ "grad_norm": 17.05916976928711,
+ "learning_rate": 9.553605735324875e-06,
+ "loss": 0.6232,
+ "step": 20000
+ },
+ {
+ "epoch": 0.47360517500288785,
+ "grad_norm": 7.523278713226318,
+ "learning_rate": 9.529287024022024e-06,
+ "loss": 0.601,
+ "step": 20500
+ },
+ {
+ "epoch": 0.4851565207346656,
+ "grad_norm": 6.75208044052124,
+ "learning_rate": 9.504968312719172e-06,
+ "loss": 0.6157,
+ "step": 21000
+ },
+ {
+ "epoch": 0.4967078664664433,
+ "grad_norm": 12.919069290161133,
+ "learning_rate": 9.480649601416323e-06,
+ "loss": 0.6098,
+ "step": 21500
+ },
+ {
+ "epoch": 0.5082592121982211,
+ "grad_norm": 19.89571189880371,
+ "learning_rate": 9.456330890113472e-06,
+ "loss": 0.6131,
+ "step": 22000
+ },
+ {
+ "epoch": 0.5198105579299989,
+ "grad_norm": 7.436350345611572,
+ "learning_rate": 9.432012178810622e-06,
+ "loss": 0.6121,
+ "step": 22500
+ },
+ {
+ "epoch": 0.5313619036617766,
+ "grad_norm": 10.576221466064453,
+ "learning_rate": 9.40769346750777e-06,
+ "loss": 0.6005,
+ "step": 23000
+ },
+ {
+ "epoch": 0.5429132493935543,
+ "grad_norm": 12.731452941894531,
+ "learning_rate": 9.383374756204921e-06,
+ "loss": 0.6217,
+ "step": 23500
+ },
+ {
+ "epoch": 0.5544645951253321,
+ "grad_norm": 9.219985961914062,
+ "learning_rate": 9.35905604490207e-06,
+ "loss": 0.5985,
+ "step": 24000
+ },
+ {
+ "epoch": 0.5660159408571098,
+ "grad_norm": 12.659988403320312,
+ "learning_rate": 9.334737333599219e-06,
+ "loss": 0.5951,
+ "step": 24500
+ },
+ {
+ "epoch": 0.5775672865888876,
+ "grad_norm": 32.879234313964844,
+ "learning_rate": 9.310418622296367e-06,
+ "loss": 0.5961,
+ "step": 25000
+ },
+ {
+ "epoch": 0.5891186323206653,
+ "grad_norm": 9.763121604919434,
+ "learning_rate": 9.286099910993518e-06,
+ "loss": 0.6081,
+ "step": 25500
+ },
+ {
+ "epoch": 0.6006699780524432,
+ "grad_norm": 9.057079315185547,
+ "learning_rate": 9.261781199690667e-06,
+ "loss": 0.5863,
+ "step": 26000
+ },
+ {
+ "epoch": 0.6122213237842209,
+ "grad_norm": 17.381582260131836,
+ "learning_rate": 9.237462488387817e-06,
+ "loss": 0.5857,
+ "step": 26500
+ },
+ {
+ "epoch": 0.6237726695159986,
+ "grad_norm": 18.3979549407959,
+ "learning_rate": 9.213143777084966e-06,
+ "loss": 0.5853,
+ "step": 27000
+ },
+ {
+ "epoch": 0.6353240152477764,
+ "grad_norm": 11.13846206665039,
+ "learning_rate": 9.188825065782114e-06,
+ "loss": 0.582,
+ "step": 27500
+ },
+ {
+ "epoch": 0.6468753609795541,
+ "grad_norm": 8.506060600280762,
+ "learning_rate": 9.164506354479265e-06,
+ "loss": 0.5773,
+ "step": 28000
+ },
+ {
+ "epoch": 0.6584267067113319,
+ "grad_norm": 9.814123153686523,
+ "learning_rate": 9.140187643176414e-06,
+ "loss": 0.5797,
+ "step": 28500
+ },
+ {
+ "epoch": 0.6699780524431096,
+ "grad_norm": 11.70206069946289,
+ "learning_rate": 9.115868931873562e-06,
+ "loss": 0.5587,
+ "step": 29000
+ },
+ {
+ "epoch": 0.6815293981748873,
+ "grad_norm": 11.949383735656738,
+ "learning_rate": 9.091550220570711e-06,
+ "loss": 0.5769,
+ "step": 29500
+ },
+ {
+ "epoch": 0.6930807439066651,
+ "grad_norm": 11.51278018951416,
+ "learning_rate": 9.067231509267862e-06,
+ "loss": 0.5798,
+ "step": 30000
+ },
+ {
+ "epoch": 0.7046320896384429,
+ "grad_norm": 13.705997467041016,
+ "learning_rate": 9.042912797965012e-06,
+ "loss": 0.5585,
+ "step": 30500
+ },
+ {
+ "epoch": 0.7161834353702207,
+ "grad_norm": 18.50054168701172,
+ "learning_rate": 9.01859408666216e-06,
+ "loss": 0.5656,
+ "step": 31000
+ },
+ {
+ "epoch": 0.7277347811019984,
+ "grad_norm": 23.920692443847656,
+ "learning_rate": 8.99427537535931e-06,
+ "loss": 0.5493,
+ "step": 31500
+ },
+ {
+ "epoch": 0.7392861268337761,
+ "grad_norm": 15.425186157226562,
+ "learning_rate": 8.96995666405646e-06,
+ "loss": 0.5538,
+ "step": 32000
+ },
+ {
+ "epoch": 0.7508374725655539,
+ "grad_norm": 11.077018737792969,
+ "learning_rate": 8.945637952753609e-06,
+ "loss": 0.5553,
+ "step": 32500
+ },
+ {
+ "epoch": 0.7623888182973316,
+ "grad_norm": 17.128128051757812,
+ "learning_rate": 8.921319241450757e-06,
+ "loss": 0.5657,
+ "step": 33000
+ },
+ {
+ "epoch": 0.7739401640291094,
+ "grad_norm": 37.22261428833008,
+ "learning_rate": 8.897000530147906e-06,
+ "loss": 0.5573,
+ "step": 33500
+ },
+ {
+ "epoch": 0.7854915097608871,
+ "grad_norm": 18.649255752563477,
+ "learning_rate": 8.872681818845056e-06,
+ "loss": 0.5337,
+ "step": 34000
+ },
+ {
+ "epoch": 0.7970428554926648,
+ "grad_norm": 11.49511432647705,
+ "learning_rate": 8.848363107542205e-06,
+ "loss": 0.5442,
+ "step": 34500
+ },
+ {
+ "epoch": 0.8085942012244427,
+ "grad_norm": 20.64067840576172,
+ "learning_rate": 8.824044396239356e-06,
+ "loss": 0.5632,
+ "step": 35000
+ },
+ {
+ "epoch": 0.8201455469562204,
+ "grad_norm": 18.51673126220703,
+ "learning_rate": 8.799725684936504e-06,
+ "loss": 0.5454,
+ "step": 35500
+ },
+ {
+ "epoch": 0.8316968926879982,
+ "grad_norm": 18.139997482299805,
+ "learning_rate": 8.775406973633655e-06,
+ "loss": 0.5371,
+ "step": 36000
+ },
+ {
+ "epoch": 0.8432482384197759,
+ "grad_norm": 21.852693557739258,
+ "learning_rate": 8.751088262330804e-06,
+ "loss": 0.539,
+ "step": 36500
+ },
+ {
+ "epoch": 0.8547995841515537,
+ "grad_norm": 31.914161682128906,
+ "learning_rate": 8.726769551027952e-06,
+ "loss": 0.5383,
+ "step": 37000
+ },
+ {
+ "epoch": 0.8663509298833314,
+ "grad_norm": 9.24451732635498,
+ "learning_rate": 8.702450839725101e-06,
+ "loss": 0.548,
+ "step": 37500
+ },
+ {
+ "epoch": 0.8779022756151091,
+ "grad_norm": 27.3262882232666,
+ "learning_rate": 8.678132128422251e-06,
+ "loss": 0.524,
+ "step": 38000
+ },
+ {
+ "epoch": 0.8894536213468869,
+ "grad_norm": 23.231952667236328,
+ "learning_rate": 8.6538134171194e-06,
+ "loss": 0.5121,
+ "step": 38500
+ },
+ {
+ "epoch": 0.9010049670786646,
+ "grad_norm": 28.35572052001953,
+ "learning_rate": 8.62949470581655e-06,
+ "loss": 0.5333,
+ "step": 39000
+ },
+ {
+ "epoch": 0.9125563128104425,
+ "grad_norm": 24.61405372619629,
+ "learning_rate": 8.6051759945137e-06,
+ "loss": 0.5015,
+ "step": 39500
+ },
+ {
+ "epoch": 0.9241076585422202,
+ "grad_norm": 30.62819480895996,
+ "learning_rate": 8.580857283210848e-06,
+ "loss": 0.5092,
+ "step": 40000
+ },
+ {
+ "epoch": 0.935659004273998,
+ "grad_norm": 19.295331954956055,
+ "learning_rate": 8.556538571907999e-06,
+ "loss": 0.5125,
+ "step": 40500
+ },
+ {
+ "epoch": 0.9472103500057757,
+ "grad_norm": 10.785584449768066,
+ "learning_rate": 8.532219860605147e-06,
+ "loss": 0.5161,
+ "step": 41000
+ },
+ {
+ "epoch": 0.9587616957375534,
+ "grad_norm": 24.81365394592285,
+ "learning_rate": 8.507901149302296e-06,
+ "loss": 0.498,
+ "step": 41500
+ },
+ {
+ "epoch": 0.9703130414693312,
+ "grad_norm": 48.026973724365234,
+ "learning_rate": 8.483582437999446e-06,
+ "loss": 0.4916,
+ "step": 42000
+ },
+ {
+ "epoch": 0.9818643872011089,
+ "grad_norm": 6.653892517089844,
+ "learning_rate": 8.459263726696595e-06,
+ "loss": 0.4944,
+ "step": 42500
+ },
+ {
+ "epoch": 0.9934157329328867,
+ "grad_norm": 39.814945220947266,
+ "learning_rate": 8.434945015393746e-06,
+ "loss": 0.4954,
+ "step": 43000
+ },
+ {
+ "epoch": 1.0,
+ "eval_accuracy": 0.7796003234376805,
+ "eval_f1": 0.7890500619140279,
+ "eval_loss": 0.4737609326839447,
+ "eval_matthews_correlation": 0.5599099791389662,
+ "eval_precision": 0.7656524910955671,
+ "eval_recall": 0.8139227225035354,
+ "eval_runtime": 550.4755,
+ "eval_samples_per_second": 78.632,
+ "eval_steps_per_second": 4.916,
+ "step": 43285
+ },
+ {
+ "epoch": 1.0049670786646645,
+ "grad_norm": 22.5259952545166,
+ "learning_rate": 8.410626304090894e-06,
+ "loss": 0.4339,
+ "step": 43500
+ },
+ {
+ "epoch": 1.0165184243964422,
+ "grad_norm": 25.5583438873291,
+ "learning_rate": 8.386307592788043e-06,
+ "loss": 0.3274,
+ "step": 44000
+ },
+ {
+ "epoch": 1.02806977012822,
+ "grad_norm": 9.524903297424316,
+ "learning_rate": 8.361988881485194e-06,
+ "loss": 0.3372,
+ "step": 44500
+ },
+ {
+ "epoch": 1.0396211158599977,
+ "grad_norm": 37.77333068847656,
+ "learning_rate": 8.337670170182342e-06,
+ "loss": 0.3471,
+ "step": 45000
+ },
+ {
+ "epoch": 1.0511724615917755,
+ "grad_norm": 47.2765007019043,
+ "learning_rate": 8.313351458879491e-06,
+ "loss": 0.3436,
+ "step": 45500
+ },
+ {
+ "epoch": 1.0627238073235532,
+ "grad_norm": 58.79914474487305,
+ "learning_rate": 8.289032747576641e-06,
+ "loss": 0.3524,
+ "step": 46000
+ },
+ {
+ "epoch": 1.074275153055331,
+ "grad_norm": 23.910385131835938,
+ "learning_rate": 8.26471403627379e-06,
+ "loss": 0.3283,
+ "step": 46500
+ },
+ {
+ "epoch": 1.0858264987871087,
+ "grad_norm": 65.18335723876953,
+ "learning_rate": 8.24039532497094e-06,
+ "loss": 0.3529,
+ "step": 47000
+ },
+ {
+ "epoch": 1.0973778445188864,
+ "grad_norm": 29.333236694335938,
+ "learning_rate": 8.21607661366809e-06,
+ "loss": 0.3471,
+ "step": 47500
+ },
+ {
+ "epoch": 1.1089291902506642,
+ "grad_norm": 41.33262252807617,
+ "learning_rate": 8.191757902365238e-06,
+ "loss": 0.3507,
+ "step": 48000
+ },
+ {
+ "epoch": 1.120480535982442,
+ "grad_norm": 91.7591781616211,
+ "learning_rate": 8.167439191062387e-06,
+ "loss": 0.3506,
+ "step": 48500
+ },
+ {
+ "epoch": 1.1320318817142196,
+ "grad_norm": 7.366957187652588,
+ "learning_rate": 8.143120479759537e-06,
+ "loss": 0.3526,
+ "step": 49000
+ },
+ {
+ "epoch": 1.1435832274459974,
+ "grad_norm": 31.159626007080078,
+ "learning_rate": 8.118801768456686e-06,
+ "loss": 0.3423,
+ "step": 49500
+ },
+ {
+ "epoch": 1.1551345731777751,
+ "grad_norm": 15.73024845123291,
+ "learning_rate": 8.094483057153836e-06,
+ "loss": 0.3516,
+ "step": 50000
+ },
+ {
+ "epoch": 1.1666859189095529,
+ "grad_norm": 66.58766174316406,
+ "learning_rate": 8.070164345850985e-06,
+ "loss": 0.3435,
+ "step": 50500
+ },
+ {
+ "epoch": 1.1782372646413308,
+ "grad_norm": 48.182308197021484,
+ "learning_rate": 8.045845634548136e-06,
+ "loss": 0.344,
+ "step": 51000
+ },
+ {
+ "epoch": 1.1897886103731086,
+ "grad_norm": 42.61834716796875,
+ "learning_rate": 8.021526923245284e-06,
+ "loss": 0.3739,
+ "step": 51500
+ },
+ {
+ "epoch": 1.2013399561048863,
+ "grad_norm": 59.38914489746094,
+ "learning_rate": 7.997208211942433e-06,
+ "loss": 0.3399,
+ "step": 52000
+ },
+ {
+ "epoch": 1.212891301836664,
+ "grad_norm": 5.7647857666015625,
+ "learning_rate": 7.972889500639582e-06,
+ "loss": 0.3518,
+ "step": 52500
+ },
+ {
+ "epoch": 1.2244426475684418,
+ "grad_norm": 73.71373748779297,
+ "learning_rate": 7.948570789336732e-06,
+ "loss": 0.3523,
+ "step": 53000
+ },
+ {
+ "epoch": 1.2359939933002195,
+ "grad_norm": 14.03942584991455,
+ "learning_rate": 7.924252078033881e-06,
+ "loss": 0.3512,
+ "step": 53500
+ },
+ {
+ "epoch": 1.2475453390319973,
+ "grad_norm": 53.254791259765625,
+ "learning_rate": 7.899933366731031e-06,
+ "loss": 0.3454,
+ "step": 54000
+ },
+ {
+ "epoch": 1.259096684763775,
+ "grad_norm": 4.009112358093262,
+ "learning_rate": 7.87561465542818e-06,
+ "loss": 0.3406,
+ "step": 54500
+ },
+ {
+ "epoch": 1.2706480304955527,
+ "grad_norm": 37.73891067504883,
+ "learning_rate": 7.85129594412533e-06,
+ "loss": 0.3474,
+ "step": 55000
+ },
+ {
+ "epoch": 1.2821993762273305,
+ "grad_norm": 84.36405181884766,
+ "learning_rate": 7.82697723282248e-06,
+ "loss": 0.3551,
+ "step": 55500
+ },
+ {
+ "epoch": 1.2937507219591082,
+ "grad_norm": 43.2830810546875,
+ "learning_rate": 7.802658521519628e-06,
+ "loss": 0.3521,
+ "step": 56000
+ },
+ {
+ "epoch": 1.305302067690886,
+ "grad_norm": 143.8572998046875,
+ "learning_rate": 7.778339810216777e-06,
+ "loss": 0.3544,
+ "step": 56500
+ },
+ {
+ "epoch": 1.3168534134226637,
+ "grad_norm": 19.368745803833008,
+ "learning_rate": 7.754021098913927e-06,
+ "loss": 0.332,
+ "step": 57000
+ },
+ {
+ "epoch": 1.3284047591544415,
+ "grad_norm": 37.57652282714844,
+ "learning_rate": 7.729702387611076e-06,
+ "loss": 0.3614,
+ "step": 57500
+ },
+ {
+ "epoch": 1.3399561048862192,
+ "grad_norm": 12.710687637329102,
+ "learning_rate": 7.705383676308226e-06,
+ "loss": 0.3397,
+ "step": 58000
+ },
+ {
+ "epoch": 1.351507450617997,
+ "grad_norm": 9.734477043151855,
+ "learning_rate": 7.681064965005375e-06,
+ "loss": 0.3337,
+ "step": 58500
+ },
+ {
+ "epoch": 1.3630587963497747,
+ "grad_norm": 50.98896408081055,
+ "learning_rate": 7.656746253702524e-06,
+ "loss": 0.3466,
+ "step": 59000
+ },
+ {
+ "epoch": 1.3746101420815524,
+ "grad_norm": 36.90542221069336,
+ "learning_rate": 7.632427542399674e-06,
+ "loss": 0.3345,
+ "step": 59500
+ },
+ {
+ "epoch": 1.3861614878133302,
+ "grad_norm": 87.90083312988281,
+ "learning_rate": 7.608108831096823e-06,
+ "loss": 0.3268,
+ "step": 60000
+ },
+ {
+ "epoch": 1.397712833545108,
+ "grad_norm": 33.43013000488281,
+ "learning_rate": 7.583790119793973e-06,
+ "loss": 0.3572,
+ "step": 60500
+ },
+ {
+ "epoch": 1.4092641792768856,
+ "grad_norm": 10.33069896697998,
+ "learning_rate": 7.559471408491121e-06,
+ "loss": 0.3565,
+ "step": 61000
+ },
+ {
+ "epoch": 1.4208155250086634,
+ "grad_norm": 1.686011791229248,
+ "learning_rate": 7.535152697188272e-06,
+ "loss": 0.3647,
+ "step": 61500
+ },
+ {
+ "epoch": 1.4323668707404413,
+ "grad_norm": 20.76148223876953,
+ "learning_rate": 7.5108339858854206e-06,
+ "loss": 0.3284,
+ "step": 62000
+ },
+ {
+ "epoch": 1.443918216472219,
+ "grad_norm": 9.261433601379395,
+ "learning_rate": 7.48651527458257e-06,
+ "loss": 0.3474,
+ "step": 62500
+ },
+ {
+ "epoch": 1.4554695622039968,
+ "grad_norm": 38.97019577026367,
+ "learning_rate": 7.462196563279719e-06,
+ "loss": 0.3433,
+ "step": 63000
+ },
+ {
+ "epoch": 1.4670209079357746,
+ "grad_norm": 43.138450622558594,
+ "learning_rate": 7.437877851976869e-06,
+ "loss": 0.3406,
+ "step": 63500
+ },
+ {
+ "epoch": 1.4785722536675523,
+ "grad_norm": 10.66066837310791,
+ "learning_rate": 7.413559140674018e-06,
+ "loss": 0.3533,
+ "step": 64000
+ },
+ {
+ "epoch": 1.49012359939933,
+ "grad_norm": 32.289588928222656,
+ "learning_rate": 7.389240429371168e-06,
+ "loss": 0.3377,
+ "step": 64500
+ },
+ {
+ "epoch": 1.5016749451311078,
+ "grad_norm": 85.28985595703125,
+ "learning_rate": 7.364921718068316e-06,
+ "loss": 0.3598,
+ "step": 65000
+ },
+ {
+ "epoch": 1.5132262908628855,
+ "grad_norm": 56.37942123413086,
+ "learning_rate": 7.340603006765467e-06,
+ "loss": 0.3354,
+ "step": 65500
+ },
+ {
+ "epoch": 1.5247776365946633,
+ "grad_norm": 91.6646499633789,
+ "learning_rate": 7.3162842954626156e-06,
+ "loss": 0.3557,
+ "step": 66000
+ },
+ {
+ "epoch": 1.536328982326441,
+ "grad_norm": 22.282794952392578,
+ "learning_rate": 7.291965584159765e-06,
+ "loss": 0.3375,
+ "step": 66500
+ },
+ {
+ "epoch": 1.5478803280582187,
+ "grad_norm": 23.59638214111328,
+ "learning_rate": 7.267646872856914e-06,
+ "loss": 0.3613,
+ "step": 67000
+ },
+ {
+ "epoch": 1.5594316737899967,
+ "grad_norm": 101.47071838378906,
+ "learning_rate": 7.243328161554064e-06,
+ "loss": 0.3625,
+ "step": 67500
+ },
+ {
+ "epoch": 1.5709830195217744,
+ "grad_norm": 10.680140495300293,
+ "learning_rate": 7.219009450251213e-06,
+ "loss": 0.3447,
+ "step": 68000
+ },
+ {
+ "epoch": 1.5825343652535522,
+ "grad_norm": 3.526810646057129,
+ "learning_rate": 7.194690738948363e-06,
+ "loss": 0.3752,
+ "step": 68500
+ },
+ {
+ "epoch": 1.59408571098533,
+ "grad_norm": 63.79883575439453,
+ "learning_rate": 7.170372027645511e-06,
+ "loss": 0.3556,
+ "step": 69000
+ },
+ {
+ "epoch": 1.6056370567171077,
+ "grad_norm": 57.77899169921875,
+ "learning_rate": 7.14605331634266e-06,
+ "loss": 0.3734,
+ "step": 69500
+ },
+ {
+ "epoch": 1.6171884024488854,
+ "grad_norm": 12.811071395874023,
+ "learning_rate": 7.1217346050398105e-06,
+ "loss": 0.3314,
+ "step": 70000
+ },
+ {
+ "epoch": 1.6287397481806631,
+ "grad_norm": 45.64273452758789,
+ "learning_rate": 7.09741589373696e-06,
+ "loss": 0.3603,
+ "step": 70500
+ },
+ {
+ "epoch": 1.6402910939124409,
+ "grad_norm": 0.7724096775054932,
+ "learning_rate": 7.073097182434109e-06,
+ "loss": 0.3531,
+ "step": 71000
+ },
+ {
+ "epoch": 1.6518424396442186,
+ "grad_norm": 111.21202850341797,
+ "learning_rate": 7.048778471131258e-06,
+ "loss": 0.3687,
+ "step": 71500
+ },
+ {
+ "epoch": 1.6633937853759964,
+ "grad_norm": 2.9768288135528564,
+ "learning_rate": 7.024459759828408e-06,
+ "loss": 0.362,
+ "step": 72000
+ },
+ {
+ "epoch": 1.674945131107774,
+ "grad_norm": 27.48809242248535,
+ "learning_rate": 7.000141048525558e-06,
+ "loss": 0.3616,
+ "step": 72500
+ },
+ {
+ "epoch": 1.6864964768395518,
+ "grad_norm": 48.34687805175781,
+ "learning_rate": 6.975822337222706e-06,
+ "loss": 0.3367,
+ "step": 73000
+ },
+ {
+ "epoch": 1.6980478225713296,
+ "grad_norm": 39.69007110595703,
+ "learning_rate": 6.951503625919855e-06,
+ "loss": 0.3621,
+ "step": 73500
+ },
+ {
+ "epoch": 1.7095991683031073,
+ "grad_norm": 110.33565521240234,
+ "learning_rate": 6.9271849146170055e-06,
+ "loss": 0.3539,
+ "step": 74000
+ },
+ {
+ "epoch": 1.721150514034885,
+ "grad_norm": 41.324222564697266,
+ "learning_rate": 6.902866203314154e-06,
+ "loss": 0.3433,
+ "step": 74500
+ },
+ {
+ "epoch": 1.7327018597666628,
+ "grad_norm": 73.07483673095703,
+ "learning_rate": 6.878547492011304e-06,
+ "loss": 0.3581,
+ "step": 75000
+ },
+ {
+ "epoch": 1.7442532054984405,
+ "grad_norm": 2.784154176712036,
+ "learning_rate": 6.854228780708453e-06,
+ "loss": 0.3506,
+ "step": 75500
+ },
+ {
+ "epoch": 1.7558045512302183,
+ "grad_norm": 46.016876220703125,
+ "learning_rate": 6.829910069405603e-06,
+ "loss": 0.3687,
+ "step": 76000
+ },
+ {
+ "epoch": 1.767355896961996,
+ "grad_norm": 10.431674003601074,
+ "learning_rate": 6.805591358102752e-06,
+ "loss": 0.3448,
+ "step": 76500
+ },
+ {
+ "epoch": 1.7789072426937738,
+ "grad_norm": 77.33363342285156,
+ "learning_rate": 6.781272646799901e-06,
+ "loss": 0.3688,
+ "step": 77000
+ },
+ {
+ "epoch": 1.7904585884255515,
+ "grad_norm": 36.162261962890625,
+ "learning_rate": 6.75695393549705e-06,
+ "loss": 0.3481,
+ "step": 77500
+ },
+ {
+ "epoch": 1.8020099341573292,
+ "grad_norm": 34.184844970703125,
+ "learning_rate": 6.7326352241942005e-06,
+ "loss": 0.3538,
+ "step": 78000
+ },
+ {
+ "epoch": 1.813561279889107,
+ "grad_norm": 89.07086944580078,
+ "learning_rate": 6.708316512891349e-06,
+ "loss": 0.3735,
+ "step": 78500
+ },
+ {
+ "epoch": 1.8251126256208847,
+ "grad_norm": 3.9982004165649414,
+ "learning_rate": 6.683997801588499e-06,
+ "loss": 0.3394,
+ "step": 79000
+ },
+ {
+ "epoch": 1.8366639713526625,
+ "grad_norm": 81.374267578125,
+ "learning_rate": 6.659679090285648e-06,
+ "loss": 0.3174,
+ "step": 79500
+ },
+ {
+ "epoch": 1.8482153170844402,
+ "grad_norm": 0.20297826826572418,
+ "learning_rate": 6.635360378982797e-06,
+ "loss": 0.3153,
+ "step": 80000
+ },
+ {
+ "epoch": 1.859766662816218,
+ "grad_norm": 59.6692008972168,
+ "learning_rate": 6.611041667679947e-06,
+ "loss": 0.3545,
+ "step": 80500
+ },
+ {
+ "epoch": 1.8713180085479957,
+ "grad_norm": 1.2928632497787476,
+ "learning_rate": 6.586722956377096e-06,
+ "loss": 0.3373,
+ "step": 81000
+ },
+ {
+ "epoch": 1.8828693542797736,
+ "grad_norm": 94.39313507080078,
+ "learning_rate": 6.562404245074245e-06,
+ "loss": 0.3222,
+ "step": 81500
+ },
+ {
+ "epoch": 1.8944207000115514,
+ "grad_norm": 29.352657318115234,
+ "learning_rate": 6.538085533771395e-06,
+ "loss": 0.3418,
+ "step": 82000
+ },
+ {
+ "epoch": 1.9059720457433291,
+ "grad_norm": 11.170536041259766,
+ "learning_rate": 6.513766822468544e-06,
+ "loss": 0.3423,
+ "step": 82500
+ },
+ {
+ "epoch": 1.9175233914751069,
+ "grad_norm": 90.75904846191406,
+ "learning_rate": 6.489448111165694e-06,
+ "loss": 0.3253,
+ "step": 83000
+ },
+ {
+ "epoch": 1.9290747372068846,
+ "grad_norm": 79.5523681640625,
+ "learning_rate": 6.4651293998628426e-06,
+ "loss": 0.3423,
+ "step": 83500
+ },
+ {
+ "epoch": 1.9406260829386623,
+ "grad_norm": 2.362600564956665,
+ "learning_rate": 6.440810688559992e-06,
+ "loss": 0.3233,
+ "step": 84000
+ },
+ {
+ "epoch": 1.95217742867044,
+ "grad_norm": 106.5050277709961,
+ "learning_rate": 6.416491977257142e-06,
+ "loss": 0.3291,
+ "step": 84500
+ },
+ {
+ "epoch": 1.9637287744022178,
+ "grad_norm": 44.49625015258789,
+ "learning_rate": 6.392173265954291e-06,
+ "loss": 0.3462,
+ "step": 85000
+ },
+ {
+ "epoch": 1.9752801201339956,
+ "grad_norm": 65.74842071533203,
+ "learning_rate": 6.36785455465144e-06,
+ "loss": 0.3356,
+ "step": 85500
+ },
+ {
+ "epoch": 1.9868314658657735,
+ "grad_norm": 25.529869079589844,
+ "learning_rate": 6.34353584334859e-06,
+ "loss": 0.3418,
+ "step": 86000
+ },
+ {
+ "epoch": 1.9983828115975513,
+ "grad_norm": 0.7819985151290894,
+ "learning_rate": 6.319217132045739e-06,
+ "loss": 0.3299,
+ "step": 86500
+ },
+ {
+ "epoch": 2.0,
+ "eval_accuracy": 0.8750144391821647,
+ "eval_f1": 0.883420247381804,
+ "eval_loss": 0.5118392705917358,
+ "eval_matthews_correlation": 0.7549403916374646,
+ "eval_precision": 0.8371656115989381,
+ "eval_recall": 0.9350850782354819,
+ "eval_runtime": 543.2127,
+ "eval_samples_per_second": 79.683,
+ "eval_steps_per_second": 4.981,
+ "step": 86570
+ },
+ {
+ "epoch": 2.009934157329329,
+ "grad_norm": 0.09307216852903366,
+ "learning_rate": 6.294898420742889e-06,
+ "loss": 0.2056,
+ "step": 87000
+ },
+ {
+ "epoch": 2.0214855030611067,
+ "grad_norm": 0.1265561431646347,
+ "learning_rate": 6.2705797094400376e-06,
+ "loss": 0.201,
+ "step": 87500
+ },
+ {
+ "epoch": 2.0330368487928845,
+ "grad_norm": 0.5366824865341187,
+ "learning_rate": 6.246260998137187e-06,
+ "loss": 0.1897,
+ "step": 88000
+ },
+ {
+ "epoch": 2.0445881945246622,
+ "grad_norm": 37.82719421386719,
+ "learning_rate": 6.221942286834337e-06,
+ "loss": 0.2095,
+ "step": 88500
+ },
+ {
+ "epoch": 2.05613954025644,
+ "grad_norm": 0.5017096996307373,
+ "learning_rate": 6.197623575531486e-06,
+ "loss": 0.1988,
+ "step": 89000
+ },
+ {
+ "epoch": 2.0676908859882177,
+ "grad_norm": 0.10803969949483871,
+ "learning_rate": 6.173304864228635e-06,
+ "loss": 0.1991,
+ "step": 89500
+ },
+ {
+ "epoch": 2.0792422317199954,
+ "grad_norm": 0.3513072431087494,
+ "learning_rate": 6.148986152925785e-06,
+ "loss": 0.2018,
+ "step": 90000
+ },
+ {
+ "epoch": 2.090793577451773,
+ "grad_norm": 1.3080644607543945,
+ "learning_rate": 6.124667441622933e-06,
+ "loss": 0.2095,
+ "step": 90500
+ },
+ {
+ "epoch": 2.102344923183551,
+ "grad_norm": 5.2385053634643555,
+ "learning_rate": 6.100348730320084e-06,
+ "loss": 0.1974,
+ "step": 91000
+ },
+ {
+ "epoch": 2.1138962689153287,
+ "grad_norm": 10.532730102539062,
+ "learning_rate": 6.0760300190172326e-06,
+ "loss": 0.2019,
+ "step": 91500
+ },
+ {
+ "epoch": 2.1254476146471064,
+ "grad_norm": 0.104009710252285,
+ "learning_rate": 6.051711307714382e-06,
+ "loss": 0.2091,
+ "step": 92000
+ },
+ {
+ "epoch": 2.136998960378884,
+ "grad_norm": 0.038262490183115005,
+ "learning_rate": 6.027392596411531e-06,
+ "loss": 0.2422,
+ "step": 92500
+ },
+ {
+ "epoch": 2.148550306110662,
+ "grad_norm": 0.30325543880462646,
+ "learning_rate": 6.003073885108681e-06,
+ "loss": 0.1971,
+ "step": 93000
+ },
+ {
+ "epoch": 2.1601016518424396,
+ "grad_norm": 0.2501794099807739,
+ "learning_rate": 5.97875517380583e-06,
+ "loss": 0.2123,
+ "step": 93500
+ },
+ {
+ "epoch": 2.1716529975742174,
+ "grad_norm": 0.9724676609039307,
+ "learning_rate": 5.95443646250298e-06,
+ "loss": 0.2222,
+ "step": 94000
+ },
+ {
+ "epoch": 2.183204343305995,
+ "grad_norm": 23.507680892944336,
+ "learning_rate": 5.930117751200128e-06,
+ "loss": 0.199,
+ "step": 94500
+ },
+ {
+ "epoch": 2.194755689037773,
+ "grad_norm": 23.95125389099121,
+ "learning_rate": 5.905799039897279e-06,
+ "loss": 0.2367,
+ "step": 95000
+ },
+ {
+ "epoch": 2.2063070347695506,
+ "grad_norm": 0.026822537183761597,
+ "learning_rate": 5.8814803285944275e-06,
+ "loss": 0.1918,
+ "step": 95500
+ },
+ {
+ "epoch": 2.2178583805013283,
+ "grad_norm": 0.11111404001712799,
+ "learning_rate": 5.857161617291577e-06,
+ "loss": 0.2047,
+ "step": 96000
+ },
+ {
+ "epoch": 2.229409726233106,
+ "grad_norm": 0.1919553279876709,
+ "learning_rate": 5.832842905988726e-06,
+ "loss": 0.225,
+ "step": 96500
+ },
+ {
+ "epoch": 2.240961071964884,
+ "grad_norm": 0.44445013999938965,
+ "learning_rate": 5.808524194685876e-06,
+ "loss": 0.2373,
+ "step": 97000
+ },
+ {
+ "epoch": 2.2525124176966616,
+ "grad_norm": 0.0314050130546093,
+ "learning_rate": 5.784205483383025e-06,
+ "loss": 0.2313,
+ "step": 97500
+ },
+ {
+ "epoch": 2.2640637634284393,
+ "grad_norm": 0.3491435945034027,
+ "learning_rate": 5.759886772080175e-06,
+ "loss": 0.2149,
+ "step": 98000
+ },
+ {
+ "epoch": 2.275615109160217,
+ "grad_norm": 0.41699114441871643,
+ "learning_rate": 5.735568060777323e-06,
+ "loss": 0.2012,
+ "step": 98500
+ },
+ {
+ "epoch": 2.2871664548919948,
+ "grad_norm": 0.05766846239566803,
+ "learning_rate": 5.711249349474474e-06,
+ "loss": 0.2247,
+ "step": 99000
+ },
+ {
+ "epoch": 2.2987178006237725,
+ "grad_norm": 10.02811336517334,
+ "learning_rate": 5.6869306381716225e-06,
+ "loss": 0.2081,
+ "step": 99500
+ },
+ {
+ "epoch": 2.3102691463555503,
+ "grad_norm": 0.1262376755475998,
+ "learning_rate": 5.662611926868772e-06,
+ "loss": 0.2471,
+ "step": 100000
+ },
+ {
+ "epoch": 2.321820492087328,
+ "grad_norm": 7.784575939178467,
+ "learning_rate": 5.638293215565921e-06,
+ "loss": 0.2188,
+ "step": 100500
+ },
+ {
+ "epoch": 2.3333718378191057,
+ "grad_norm": 0.4924183189868927,
+ "learning_rate": 5.61397450426307e-06,
+ "loss": 0.182,
+ "step": 101000
+ },
+ {
+ "epoch": 2.3449231835508835,
+ "grad_norm": 0.8090869188308716,
+ "learning_rate": 5.58965579296022e-06,
+ "loss": 0.2307,
+ "step": 101500
+ },
+ {
+ "epoch": 2.3564745292826617,
+ "grad_norm": 0.5244800448417664,
+ "learning_rate": 5.56533708165737e-06,
+ "loss": 0.2162,
+ "step": 102000
+ },
+ {
+ "epoch": 2.368025875014439,
+ "grad_norm": 0.12117698788642883,
+ "learning_rate": 5.541018370354518e-06,
+ "loss": 0.2339,
+ "step": 102500
+ },
+ {
+ "epoch": 2.379577220746217,
+ "grad_norm": 139.68356323242188,
+ "learning_rate": 5.516699659051667e-06,
+ "loss": 0.2306,
+ "step": 103000
+ },
+ {
+ "epoch": 2.391128566477995,
+ "grad_norm": 0.34436774253845215,
+ "learning_rate": 5.4923809477488175e-06,
+ "loss": 0.2289,
+ "step": 103500
+ },
+ {
+ "epoch": 2.4026799122097726,
+ "grad_norm": 0.06684353947639465,
+ "learning_rate": 5.468062236445967e-06,
+ "loss": 0.2388,
+ "step": 104000
+ },
+ {
+ "epoch": 2.4142312579415504,
+ "grad_norm": 38.31502914428711,
+ "learning_rate": 5.443743525143116e-06,
+ "loss": 0.2185,
+ "step": 104500
+ },
+ {
+ "epoch": 2.425782603673328,
+ "grad_norm": 0.47605860233306885,
+ "learning_rate": 5.419424813840265e-06,
+ "loss": 0.2536,
+ "step": 105000
+ },
+ {
+ "epoch": 2.437333949405106,
+ "grad_norm": 2.486711025238037,
+ "learning_rate": 5.395106102537415e-06,
+ "loss": 0.224,
+ "step": 105500
+ },
+ {
+ "epoch": 2.4488852951368836,
+ "grad_norm": 0.19100871682167053,
+ "learning_rate": 5.370787391234565e-06,
+ "loss": 0.2429,
+ "step": 106000
+ },
+ {
+ "epoch": 2.4604366408686613,
+ "grad_norm": 0.6943984627723694,
+ "learning_rate": 5.346468679931713e-06,
+ "loss": 0.2051,
+ "step": 106500
+ },
+ {
+ "epoch": 2.471987986600439,
+ "grad_norm": 100.54878234863281,
+ "learning_rate": 5.322149968628862e-06,
+ "loss": 0.2177,
+ "step": 107000
+ },
+ {
+ "epoch": 2.483539332332217,
+ "grad_norm": 1.6368072032928467,
+ "learning_rate": 5.2978312573260125e-06,
+ "loss": 0.2182,
+ "step": 107500
+ },
+ {
+ "epoch": 2.4950906780639945,
+ "grad_norm": 0.33522534370422363,
+ "learning_rate": 5.273512546023162e-06,
+ "loss": 0.2011,
+ "step": 108000
+ },
+ {
+ "epoch": 2.5066420237957723,
+ "grad_norm": 1.7450274229049683,
+ "learning_rate": 5.249193834720311e-06,
+ "loss": 0.2316,
+ "step": 108500
+ },
+ {
+ "epoch": 2.51819336952755,
+ "grad_norm": 0.29246997833251953,
+ "learning_rate": 5.2248751234174596e-06,
+ "loss": 0.2311,
+ "step": 109000
+ },
+ {
+ "epoch": 2.5297447152593278,
+ "grad_norm": 0.0980878472328186,
+ "learning_rate": 5.20055641211461e-06,
+ "loss": 0.2224,
+ "step": 109500
+ },
+ {
+ "epoch": 2.5412960609911055,
+ "grad_norm": 19.347719192504883,
+ "learning_rate": 5.17623770081176e-06,
+ "loss": 0.2156,
+ "step": 110000
+ },
+ {
+ "epoch": 2.5528474067228832,
+ "grad_norm": 0.4555191099643707,
+ "learning_rate": 5.151918989508908e-06,
+ "loss": 0.2192,
+ "step": 110500
+ },
+ {
+ "epoch": 2.564398752454661,
+ "grad_norm": 8.327524185180664,
+ "learning_rate": 5.127600278206057e-06,
+ "loss": 0.2331,
+ "step": 111000
+ },
+ {
+ "epoch": 2.5759500981864387,
+ "grad_norm": 0.23247091472148895,
+ "learning_rate": 5.103281566903207e-06,
+ "loss": 0.2068,
+ "step": 111500
+ },
+ {
+ "epoch": 2.5875014439182165,
+ "grad_norm": 0.5253803730010986,
+ "learning_rate": 5.078962855600357e-06,
+ "loss": 0.2199,
+ "step": 112000
+ },
+ {
+ "epoch": 2.599052789649994,
+ "grad_norm": 0.16480697691440582,
+ "learning_rate": 5.054644144297506e-06,
+ "loss": 0.2303,
+ "step": 112500
+ },
+ {
+ "epoch": 2.610604135381772,
+ "grad_norm": 13.176816940307617,
+ "learning_rate": 5.0303254329946546e-06,
+ "loss": 0.2463,
+ "step": 113000
+ },
+ {
+ "epoch": 2.6221554811135497,
+ "grad_norm": 0.30696120858192444,
+ "learning_rate": 5.006006721691804e-06,
+ "loss": 0.229,
+ "step": 113500
+ },
+ {
+ "epoch": 2.6337068268453274,
+ "grad_norm": 10.747370719909668,
+ "learning_rate": 4.981688010388954e-06,
+ "loss": 0.239,
+ "step": 114000
+ },
+ {
+ "epoch": 2.645258172577105,
+ "grad_norm": 10.800712585449219,
+ "learning_rate": 4.957369299086103e-06,
+ "loss": 0.2464,
+ "step": 114500
+ },
+ {
+ "epoch": 2.656809518308883,
+ "grad_norm": 137.77926635742188,
+ "learning_rate": 4.933050587783252e-06,
+ "loss": 0.2082,
+ "step": 115000
+ },
+ {
+ "epoch": 2.6683608640406606,
+ "grad_norm": 47.672916412353516,
+ "learning_rate": 4.908731876480402e-06,
+ "loss": 0.2169,
+ "step": 115500
+ },
+ {
+ "epoch": 2.6799122097724384,
+ "grad_norm": 13.38288402557373,
+ "learning_rate": 4.884413165177551e-06,
+ "loss": 0.2316,
+ "step": 116000
+ },
+ {
+ "epoch": 2.691463555504216,
+ "grad_norm": 1.015547275543213,
+ "learning_rate": 4.860094453874701e-06,
+ "loss": 0.2129,
+ "step": 116500
+ },
+ {
+ "epoch": 2.703014901235994,
+ "grad_norm": 0.6633150577545166,
+ "learning_rate": 4.8357757425718495e-06,
+ "loss": 0.2346,
+ "step": 117000
+ },
+ {
+ "epoch": 2.7145662469677716,
+ "grad_norm": 0.4231832027435303,
+ "learning_rate": 4.811457031268999e-06,
+ "loss": 0.2465,
+ "step": 117500
+ },
+ {
+ "epoch": 2.7261175926995493,
+ "grad_norm": 0.1299211084842682,
+ "learning_rate": 4.787138319966149e-06,
+ "loss": 0.2395,
+ "step": 118000
+ },
+ {
+ "epoch": 2.7376689384313275,
+ "grad_norm": 0.05142102763056755,
+ "learning_rate": 4.762819608663298e-06,
+ "loss": 0.2327,
+ "step": 118500
+ },
+ {
+ "epoch": 2.749220284163105,
+ "grad_norm": 0.11573930829763412,
+ "learning_rate": 4.738500897360447e-06,
+ "loss": 0.2035,
+ "step": 119000
+ },
+ {
+ "epoch": 2.760771629894883,
+ "grad_norm": 0.5103738307952881,
+ "learning_rate": 4.714182186057597e-06,
+ "loss": 0.2195,
+ "step": 119500
+ },
+ {
+ "epoch": 2.7723229756266603,
+ "grad_norm": 0.45950281620025635,
+ "learning_rate": 4.689863474754746e-06,
+ "loss": 0.2523,
+ "step": 120000
+ },
+ {
+ "epoch": 2.7838743213584385,
+ "grad_norm": 0.3899378180503845,
+ "learning_rate": 4.665544763451896e-06,
+ "loss": 0.2071,
+ "step": 120500
+ },
+ {
+ "epoch": 2.795425667090216,
+ "grad_norm": 6.975862503051758,
+ "learning_rate": 4.6412260521490445e-06,
+ "loss": 0.2541,
+ "step": 121000
+ },
+ {
+ "epoch": 2.806977012821994,
+ "grad_norm": 34.19971466064453,
+ "learning_rate": 4.616907340846194e-06,
+ "loss": 0.2136,
+ "step": 121500
+ },
+ {
+ "epoch": 2.8185283585537713,
+ "grad_norm": 0.10601510852575302,
+ "learning_rate": 4.592588629543344e-06,
+ "loss": 0.2143,
+ "step": 122000
+ },
+ {
+ "epoch": 2.8300797042855494,
+ "grad_norm": 7.919482231140137,
+ "learning_rate": 4.5682699182404924e-06,
+ "loss": 0.2346,
+ "step": 122500
+ },
+ {
+ "epoch": 2.8416310500173267,
+ "grad_norm": 0.47603341937065125,
+ "learning_rate": 4.543951206937642e-06,
+ "loss": 0.2185,
+ "step": 123000
+ },
+ {
+ "epoch": 2.853182395749105,
+ "grad_norm": 26.099021911621094,
+ "learning_rate": 4.519632495634792e-06,
+ "loss": 0.2307,
+ "step": 123500
+ },
+ {
+ "epoch": 2.8647337414808827,
+ "grad_norm": 0.5455193519592285,
+ "learning_rate": 4.495313784331941e-06,
+ "loss": 0.2057,
+ "step": 124000
+ },
+ {
+ "epoch": 2.8762850872126604,
+ "grad_norm": 2.6065433025360107,
+ "learning_rate": 4.47099507302909e-06,
+ "loss": 0.2273,
+ "step": 124500
+ },
+ {
+ "epoch": 2.887836432944438,
+ "grad_norm": 0.44194814562797546,
+ "learning_rate": 4.4466763617262395e-06,
+ "loss": 0.2237,
+ "step": 125000
+ },
+ {
+ "epoch": 2.899387778676216,
+ "grad_norm": 2.659928321838379,
+ "learning_rate": 4.422357650423389e-06,
+ "loss": 0.2162,
+ "step": 125500
+ },
+ {
+ "epoch": 2.9109391244079936,
+ "grad_norm": 1.6970124244689941,
+ "learning_rate": 4.398038939120539e-06,
+ "loss": 0.2316,
+ "step": 126000
+ },
+ {
+ "epoch": 2.9224904701397714,
+ "grad_norm": 103.99176025390625,
+ "learning_rate": 4.3737202278176874e-06,
+ "loss": 0.2209,
+ "step": 126500
+ },
+ {
+ "epoch": 2.934041815871549,
+ "grad_norm": 0.775031328201294,
+ "learning_rate": 4.349401516514837e-06,
+ "loss": 0.2073,
+ "step": 127000
+ },
+ {
+ "epoch": 2.945593161603327,
+ "grad_norm": 0.6462440490722656,
+ "learning_rate": 4.325082805211987e-06,
+ "loss": 0.2185,
+ "step": 127500
+ },
+ {
+ "epoch": 2.9571445073351046,
+ "grad_norm": 2.127567768096924,
+ "learning_rate": 4.300764093909136e-06,
+ "loss": 0.2188,
+ "step": 128000
+ },
+ {
+ "epoch": 2.9686958530668823,
+ "grad_norm": 0.3134165406227112,
+ "learning_rate": 4.276445382606285e-06,
+ "loss": 0.2495,
+ "step": 128500
+ },
+ {
+ "epoch": 2.98024719879866,
+ "grad_norm": 0.4288942515850067,
+ "learning_rate": 4.2521266713034345e-06,
+ "loss": 0.2171,
+ "step": 129000
+ },
+ {
+ "epoch": 2.991798544530438,
+ "grad_norm": 51.89253234863281,
+ "learning_rate": 4.227807960000584e-06,
+ "loss": 0.1999,
+ "step": 129500
+ },
+ {
+ "epoch": 3.0,
+ "eval_accuracy": 0.887628508721266,
+ "eval_f1": 0.8962368802798873,
+ "eval_loss": 0.683422863483429,
+ "eval_matthews_correlation": 0.7825720644338475,
+ "eval_precision": 0.8417551592867161,
+ "eval_recall": 0.9582592035034898,
+ "eval_runtime": 541.3245,
+ "eval_samples_per_second": 79.961,
+ "eval_steps_per_second": 4.999,
+ "step": 129855
+ },
+ {
+ "epoch": 3.0033498902622155,
+ "grad_norm": 8.645707130432129,
+ "learning_rate": 4.203489248697734e-06,
+ "loss": 0.2205,
+ "step": 130000
+ },
+ {
+ "epoch": 3.0149012359939933,
+ "grad_norm": 0.025636987760663033,
+ "learning_rate": 4.179170537394882e-06,
+ "loss": 0.1589,
+ "step": 130500
+ },
+ {
+ "epoch": 3.026452581725771,
+ "grad_norm": 9.634745597839355,
+ "learning_rate": 4.154851826092032e-06,
+ "loss": 0.1591,
+ "step": 131000
+ },
+ {
+ "epoch": 3.0380039274575488,
+ "grad_norm": 0.9399522542953491,
+ "learning_rate": 4.130533114789182e-06,
+ "loss": 0.1473,
+ "step": 131500
+ },
+ {
+ "epoch": 3.0495552731893265,
+ "grad_norm": 0.17956668138504028,
+ "learning_rate": 4.106214403486331e-06,
+ "loss": 0.1727,
+ "step": 132000
+ },
+ {
+ "epoch": 3.0611066189211042,
+ "grad_norm": 36.57072067260742,
+ "learning_rate": 4.08189569218348e-06,
+ "loss": 0.1547,
+ "step": 132500
+ },
+ {
+ "epoch": 3.072657964652882,
+ "grad_norm": 0.30588293075561523,
+ "learning_rate": 4.0575769808806295e-06,
+ "loss": 0.1727,
+ "step": 133000
+ },
+ {
+ "epoch": 3.0842093103846597,
+ "grad_norm": 2.21580171585083,
+ "learning_rate": 4.033258269577779e-06,
+ "loss": 0.1519,
+ "step": 133500
+ },
+ {
+ "epoch": 3.0957606561164375,
+ "grad_norm": 35.42125701904297,
+ "learning_rate": 4.008939558274928e-06,
+ "loss": 0.162,
+ "step": 134000
+ },
+ {
+ "epoch": 3.107312001848215,
+ "grad_norm": 19.89476776123047,
+ "learning_rate": 3.984620846972077e-06,
+ "loss": 0.1489,
+ "step": 134500
+ },
+ {
+ "epoch": 3.118863347579993,
+ "grad_norm": 0.5846690535545349,
+ "learning_rate": 3.960302135669227e-06,
+ "loss": 0.1838,
+ "step": 135000
+ },
+ {
+ "epoch": 3.1304146933117707,
+ "grad_norm": 33.746910095214844,
+ "learning_rate": 3.935983424366377e-06,
+ "loss": 0.1657,
+ "step": 135500
+ },
+ {
+ "epoch": 3.1419660390435484,
+ "grad_norm": 0.1903204321861267,
+ "learning_rate": 3.911664713063525e-06,
+ "loss": 0.1431,
+ "step": 136000
+ },
+ {
+ "epoch": 3.153517384775326,
+ "grad_norm": 1.768623948097229,
+ "learning_rate": 3.887346001760675e-06,
+ "loss": 0.157,
+ "step": 136500
+ },
+ {
+ "epoch": 3.165068730507104,
+ "grad_norm": 9.588990211486816,
+ "learning_rate": 3.8630272904578245e-06,
+ "loss": 0.1508,
+ "step": 137000
+ },
+ {
+ "epoch": 3.1766200762388817,
+ "grad_norm": 0.5915334820747375,
+ "learning_rate": 3.838708579154974e-06,
+ "loss": 0.1769,
+ "step": 137500
+ },
+ {
+ "epoch": 3.1881714219706594,
+ "grad_norm": 8.681124687194824,
+ "learning_rate": 3.814389867852123e-06,
+ "loss": 0.1802,
+ "step": 138000
+ },
+ {
+ "epoch": 3.199722767702437,
+ "grad_norm": 9.283745765686035,
+ "learning_rate": 3.7900711565492724e-06,
+ "loss": 0.1584,
+ "step": 138500
+ },
+ {
+ "epoch": 3.2112741134342153,
+ "grad_norm": 0.6678259968757629,
+ "learning_rate": 3.7657524452464216e-06,
+ "loss": 0.1554,
+ "step": 139000
+ },
+ {
+ "epoch": 3.2228254591659926,
+ "grad_norm": 0.31929489970207214,
+ "learning_rate": 3.741433733943571e-06,
+ "loss": 0.1702,
+ "step": 139500
+ },
+ {
+ "epoch": 3.234376804897771,
+ "grad_norm": 2.6308133602142334,
+ "learning_rate": 3.7171150226407203e-06,
+ "loss": 0.1597,
+ "step": 140000
+ },
+ {
+ "epoch": 3.2459281506295485,
+ "grad_norm": 0.37557047605514526,
+ "learning_rate": 3.69279631133787e-06,
+ "loss": 0.1542,
+ "step": 140500
+ },
+ {
+ "epoch": 3.2574794963613263,
+ "grad_norm": 0.47243937849998474,
+ "learning_rate": 3.668477600035019e-06,
+ "loss": 0.1595,
+ "step": 141000
+ },
+ {
+ "epoch": 3.269030842093104,
+ "grad_norm": 0.026057422161102295,
+ "learning_rate": 3.6441588887321686e-06,
+ "loss": 0.1917,
+ "step": 141500
+ },
+ {
+ "epoch": 3.2805821878248818,
+ "grad_norm": 0.18791376054286957,
+ "learning_rate": 3.619840177429318e-06,
+ "loss": 0.1325,
+ "step": 142000
+ },
+ {
+ "epoch": 3.2921335335566595,
+ "grad_norm": 0.5374308824539185,
+ "learning_rate": 3.5955214661264674e-06,
+ "loss": 0.1703,
+ "step": 142500
+ },
+ {
+ "epoch": 3.3036848792884372,
+ "grad_norm": 0.0012748383451253176,
+ "learning_rate": 3.5712027548236165e-06,
+ "loss": 0.1509,
+ "step": 143000
+ },
+ {
+ "epoch": 3.315236225020215,
+ "grad_norm": 0.19805267453193665,
+ "learning_rate": 3.5468840435207657e-06,
+ "loss": 0.1437,
+ "step": 143500
+ },
+ {
+ "epoch": 3.3267875707519927,
+ "grad_norm": 15.450338363647461,
+ "learning_rate": 3.5225653322179153e-06,
+ "loss": 0.1741,
+ "step": 144000
+ },
+ {
+ "epoch": 3.3383389164837705,
+ "grad_norm": 0.3308798670768738,
+ "learning_rate": 3.4982466209150645e-06,
+ "loss": 0.1392,
+ "step": 144500
+ },
+ {
+ "epoch": 3.349890262215548,
+ "grad_norm": 21.04158592224121,
+ "learning_rate": 3.473927909612214e-06,
+ "loss": 0.1704,
+ "step": 145000
+ },
+ {
+ "epoch": 3.361441607947326,
+ "grad_norm": 8.919427871704102,
+ "learning_rate": 3.449609198309363e-06,
+ "loss": 0.158,
+ "step": 145500
+ },
+ {
+ "epoch": 3.3729929536791037,
+ "grad_norm": 0.04234965518116951,
+ "learning_rate": 3.425290487006513e-06,
+ "loss": 0.1676,
+ "step": 146000
+ },
+ {
+ "epoch": 3.3845442994108814,
+ "grad_norm": 0.7735577821731567,
+ "learning_rate": 3.400971775703662e-06,
+ "loss": 0.1717,
+ "step": 146500
+ },
+ {
+ "epoch": 3.396095645142659,
+ "grad_norm": 0.0031218251679092646,
+ "learning_rate": 3.3766530644008115e-06,
+ "loss": 0.1852,
+ "step": 147000
+ },
+ {
+ "epoch": 3.407646990874437,
+ "grad_norm": 0.2701342701911926,
+ "learning_rate": 3.3523343530979607e-06,
+ "loss": 0.178,
+ "step": 147500
+ },
+ {
+ "epoch": 3.4191983366062146,
+ "grad_norm": 14.016953468322754,
+ "learning_rate": 3.3280156417951103e-06,
+ "loss": 0.1832,
+ "step": 148000
+ },
+ {
+ "epoch": 3.4307496823379924,
+ "grad_norm": 0.002833212958648801,
+ "learning_rate": 3.3036969304922594e-06,
+ "loss": 0.1431,
+ "step": 148500
+ },
+ {
+ "epoch": 3.44230102806977,
+ "grad_norm": 0.38238525390625,
+ "learning_rate": 3.279378219189409e-06,
+ "loss": 0.1736,
+ "step": 149000
+ },
+ {
+ "epoch": 3.453852373801548,
+ "grad_norm": 0.019360702484846115,
+ "learning_rate": 3.255059507886558e-06,
+ "loss": 0.1589,
+ "step": 149500
+ },
+ {
+ "epoch": 3.4654037195333256,
+ "grad_norm": 316.6242370605469,
+ "learning_rate": 3.2307407965837078e-06,
+ "loss": 0.1813,
+ "step": 150000
+ },
+ {
+ "epoch": 3.4769550652651033,
+ "grad_norm": 0.6166911125183105,
+ "learning_rate": 3.206422085280857e-06,
+ "loss": 0.1562,
+ "step": 150500
+ },
+ {
+ "epoch": 3.488506410996881,
+ "grad_norm": 0.12754186987876892,
+ "learning_rate": 3.1821033739780065e-06,
+ "loss": 0.1826,
+ "step": 151000
+ },
+ {
+ "epoch": 3.500057756728659,
+ "grad_norm": 1.4088376760482788,
+ "learning_rate": 3.1577846626751557e-06,
+ "loss": 0.1751,
+ "step": 151500
+ },
+ {
+ "epoch": 3.5116091024604366,
+ "grad_norm": 0.6573951840400696,
+ "learning_rate": 3.1334659513723053e-06,
+ "loss": 0.1612,
+ "step": 152000
+ },
+ {
+ "epoch": 3.5231604481922143,
+ "grad_norm": 0.6012342572212219,
+ "learning_rate": 3.1091472400694544e-06,
+ "loss": 0.1734,
+ "step": 152500
+ },
+ {
+ "epoch": 3.534711793923992,
2187
+ "grad_norm": 0.6947406530380249,
2188
+ "learning_rate": 3.084828528766604e-06,
2189
+ "loss": 0.1574,
2190
+ "step": 153000
2191
+ },
2192
+ {
2193
+ "epoch": 3.54626313965577,
2194
+ "grad_norm": 0.49752572178840637,
2195
+ "learning_rate": 3.060509817463753e-06,
2196
+ "loss": 0.142,
2197
+ "step": 153500
2198
+ },
2199
+ {
2200
+ "epoch": 3.5578144853875475,
2201
+ "grad_norm": 0.6474779844284058,
2202
+ "learning_rate": 3.0361911061609028e-06,
2203
+ "loss": 0.1511,
2204
+ "step": 154000
2205
+ },
2206
+ {
2207
+ "epoch": 3.5693658311193253,
2208
+ "grad_norm": 1.3100844621658325,
2209
+ "learning_rate": 3.011872394858052e-06,
2210
+ "loss": 0.1652,
2211
+ "step": 154500
2212
+ },
2213
+ {
2214
+ "epoch": 3.580917176851103,
2215
+ "grad_norm": 0.9856226444244385,
2216
+ "learning_rate": 2.987553683555201e-06,
2217
+ "loss": 0.1884,
2218
+ "step": 155000
2219
+ },
2220
+ {
2221
+ "epoch": 3.5924685225828807,
2222
+ "grad_norm": 73.24075317382812,
2223
+ "learning_rate": 2.9632349722523507e-06,
2224
+ "loss": 0.1629,
2225
+ "step": 155500
2226
+ },
2227
+ {
2228
+ "epoch": 3.6040198683146585,
2229
+ "grad_norm": 8.039386749267578,
2230
+ "learning_rate": 2.9389162609495e-06,
2231
+ "loss": 0.1628,
2232
+ "step": 156000
2233
+ },
2234
+ {
2235
+ "epoch": 3.6155712140464367,
2236
+ "grad_norm": 0.0022829582449048758,
2237
+ "learning_rate": 2.9145975496466494e-06,
2238
+ "loss": 0.1532,
2239
+ "step": 156500
2240
+ },
2241
+ {
2242
+ "epoch": 3.627122559778214,
2243
+ "grad_norm": 0.43138188123703003,
2244
+ "learning_rate": 2.8902788383437986e-06,
2245
+ "loss": 0.1711,
2246
+ "step": 157000
2247
+ },
2248
+ {
2249
+ "epoch": 3.638673905509992,
2250
+ "grad_norm": 7.881419658660889,
2251
+ "learning_rate": 2.865960127040948e-06,
2252
+ "loss": 0.155,
2253
+ "step": 157500
2254
+ },
2255
+ {
2256
+ "epoch": 3.6502252512417694,
2257
+ "grad_norm": 117.18584442138672,
2258
+ "learning_rate": 2.8416414157380973e-06,
2259
+ "loss": 0.155,
2260
+ "step": 158000
2261
+ },
2262
+ {
2263
+ "epoch": 3.6617765969735476,
2264
+ "grad_norm": 0.0010274089872837067,
2265
+ "learning_rate": 2.817322704435247e-06,
2266
+ "loss": 0.1717,
2267
+ "step": 158500
2268
+ },
2269
+ {
2270
+ "epoch": 3.673327942705325,
2271
+ "grad_norm": 109.89828491210938,
2272
+ "learning_rate": 2.793003993132396e-06,
2273
+ "loss": 0.1608,
2274
+ "step": 159000
2275
+ },
2276
+ {
2277
+ "epoch": 3.684879288437103,
2278
+ "grad_norm": 0.7722026109695435,
2279
+ "learning_rate": 2.7686852818295457e-06,
2280
+ "loss": 0.1789,
2281
+ "step": 159500
2282
+ },
2283
+ {
2284
+ "epoch": 3.6964306341688804,
2285
+ "grad_norm": 0.0013100790092721581,
2286
+ "learning_rate": 2.744366570526695e-06,
2287
+ "loss": 0.1601,
2288
+ "step": 160000
2289
+ },
2290
+ {
2291
+ "epoch": 3.7079819799006586,
2292
+ "grad_norm": 0.25156673789024353,
2293
+ "learning_rate": 2.7200478592238444e-06,
2294
+ "loss": 0.1588,
2295
+ "step": 160500
2296
+ },
2297
+ {
2298
+ "epoch": 3.7195333256324363,
2299
+ "grad_norm": 0.18242871761322021,
2300
+ "learning_rate": 2.6957291479209936e-06,
2301
+ "loss": 0.1626,
2302
+ "step": 161000
2303
+ },
2304
+ {
2305
+ "epoch": 3.731084671364214,
2306
+ "grad_norm": 0.008753698319196701,
2307
+ "learning_rate": 2.671410436618143e-06,
2308
+ "loss": 0.1654,
2309
+ "step": 161500
2310
+ },
2311
+ {
2312
+ "epoch": 3.742636017095992,
2313
+ "grad_norm": 0.6226741671562195,
2314
+ "learning_rate": 2.6470917253152923e-06,
2315
+ "loss": 0.174,
2316
+ "step": 162000
2317
+ },
2318
+ {
2319
+ "epoch": 3.7541873628277695,
2320
+ "grad_norm": 12.760992050170898,
2321
+ "learning_rate": 2.622773014012442e-06,
2322
+ "loss": 0.1656,
2323
+ "step": 162500
2324
+ },
2325
+ {
2326
+ "epoch": 3.7657387085595473,
2327
+ "grad_norm": 16.012075424194336,
2328
+ "learning_rate": 2.598454302709591e-06,
2329
+ "loss": 0.15,
2330
+ "step": 163000
2331
+ },
2332
+ {
2333
+ "epoch": 3.777290054291325,
2334
+ "grad_norm": 7.900235652923584,
2335
+ "learning_rate": 2.5741355914067407e-06,
2336
+ "loss": 0.1705,
2337
+ "step": 163500
2338
+ },
2339
+ {
2340
+ "epoch": 3.7888414000231028,
2341
+ "grad_norm": 2.0820932388305664,
2342
+ "learning_rate": 2.54981688010389e-06,
2343
+ "loss": 0.1752,
2344
+ "step": 164000
2345
+ },
2346
+ {
2347
+ "epoch": 3.8003927457548805,
2348
+ "grad_norm": 1.189961552619934,
2349
+ "learning_rate": 2.5254981688010394e-06,
2350
+ "loss": 0.1578,
2351
+ "step": 164500
2352
+ },
2353
+ {
2354
+ "epoch": 3.8119440914866582,
2355
+ "grad_norm": 0.005685470532625914,
2356
+ "learning_rate": 2.5011794574981886e-06,
2357
+ "loss": 0.147,
2358
+ "step": 165000
2359
+ },
2360
+ {
2361
+ "epoch": 3.823495437218436,
2362
+ "grad_norm": 0.33695000410079956,
2363
+ "learning_rate": 2.4768607461953377e-06,
2364
+ "loss": 0.1835,
2365
+ "step": 165500
2366
+ },
2367
+ {
2368
+ "epoch": 3.8350467829502137,
2369
+ "grad_norm": 18.306541442871094,
2370
+ "learning_rate": 2.4525420348924873e-06,
2371
+ "loss": 0.1963,
2372
+ "step": 166000
2373
+ },
2374
+ {
2375
+ "epoch": 3.8465981286819915,
2376
+ "grad_norm": 0.6380665302276611,
2377
+ "learning_rate": 2.4282233235896365e-06,
2378
+ "loss": 0.1696,
2379
+ "step": 166500
2380
+ },
2381
+ {
2382
+ "epoch": 3.858149474413769,
2383
+ "grad_norm": 0.31257039308547974,
2384
+ "learning_rate": 2.403904612286786e-06,
2385
+ "loss": 0.166,
2386
+ "step": 167000
2387
+ },
2388
+ {
2389
+ "epoch": 3.869700820145547,
2390
+ "grad_norm": 0.09384506940841675,
2391
+ "learning_rate": 2.3795859009839352e-06,
2392
+ "loss": 0.1734,
2393
+ "step": 167500
2394
+ },
2395
+ {
2396
+ "epoch": 3.8812521658773247,
2397
+ "grad_norm": 0.12250012159347534,
2398
+ "learning_rate": 2.355267189681085e-06,
2399
+ "loss": 0.1705,
2400
+ "step": 168000
2401
+ },
2402
+ {
2403
+ "epoch": 3.8928035116091024,
2404
+ "grad_norm": 0.3609682619571686,
2405
+ "learning_rate": 2.330948478378234e-06,
2406
+ "loss": 0.1548,
2407
+ "step": 168500
2408
+ },
2409
+ {
2410
+ "epoch": 3.90435485734088,
2411
+ "grad_norm": 9.560044288635254,
2412
+ "learning_rate": 2.3066297670753836e-06,
2413
+ "loss": 0.1468,
2414
+ "step": 169000
2415
+ },
2416
+ {
2417
+ "epoch": 3.915906203072658,
2418
+ "grad_norm": 1.053947925567627,
2419
+ "learning_rate": 2.2823110557725327e-06,
2420
+ "loss": 0.1686,
2421
+ "step": 169500
2422
+ },
2423
+ {
2424
+ "epoch": 3.9274575488044356,
2425
+ "grad_norm": 0.032445378601551056,
2426
+ "learning_rate": 2.2579923444696823e-06,
2427
+ "loss": 0.1487,
2428
+ "step": 170000
2429
+ },
2430
+ {
2431
+ "epoch": 3.9390088945362134,
2432
+ "grad_norm": 8.668307304382324,
2433
+ "learning_rate": 2.2336736331668315e-06,
2434
+ "loss": 0.1791,
2435
+ "step": 170500
2436
+ },
2437
+ {
2438
+ "epoch": 3.950560240267991,
2439
+ "grad_norm": 9.52789306640625,
2440
+ "learning_rate": 2.2093549218639806e-06,
2441
+ "loss": 0.1841,
2442
+ "step": 171000
2443
+ },
2444
+ {
2445
+ "epoch": 3.962111585999769,
2446
+ "grad_norm": 0.011110074818134308,
2447
+ "learning_rate": 2.18503621056113e-06,
2448
+ "loss": 0.1625,
2449
+ "step": 171500
2450
+ },
2451
+ {
2452
+ "epoch": 3.9736629317315466,
2453
+ "grad_norm": 82.39501190185547,
2454
+ "learning_rate": 2.1607174992582794e-06,
2455
+ "loss": 0.1714,
2456
+ "step": 172000
2457
+ },
2458
+ {
2459
+ "epoch": 3.9852142774633244,
2460
+ "grad_norm": 1.0335499048233032,
2461
+ "learning_rate": 2.136398787955429e-06,
2462
+ "loss": 0.15,
2463
+ "step": 172500
2464
+ },
2465
+ {
2466
+ "epoch": 3.996765623195102,
2467
+ "grad_norm": 2.844334363937378,
2468
+ "learning_rate": 2.112080076652578e-06,
2469
+ "loss": 0.1628,
2470
+ "step": 173000
2471
+ },
2472
+ {
2473
+ "epoch": 4.0,
2474
+ "eval_accuracy": 0.8855723691810096,
2475
+ "eval_f1": 0.895216738242823,
2476
+ "eval_loss": 0.9363193511962891,
2477
+ "eval_matthews_correlation": 0.7805306223945409,
2478
+ "eval_precision": 0.8347009626005997,
2479
+ "eval_recall": 0.9651931937411614,
2480
+ "eval_runtime": 550.7934,
2481
+ "eval_samples_per_second": 78.587,
2482
+ "eval_steps_per_second": 4.913,
2483
+ "step": 173140
2484
+ },
2485
+ {
2486
+ "epoch": 4.00831696892688,
2487
+ "grad_norm": 0.03443636745214462,
2488
+ "learning_rate": 2.0877613653497277e-06,
2489
+ "loss": 0.1259,
2490
+ "step": 173500
2491
+ },
2492
+ {
2493
+ "epoch": 4.019868314658658,
2494
+ "grad_norm": 0.002207839395850897,
2495
+ "learning_rate": 2.063442654046877e-06,
2496
+ "loss": 0.108,
2497
+ "step": 174000
2498
+ },
2499
+ {
2500
+ "epoch": 4.031419660390435,
2501
+ "grad_norm": 0.0005217639845795929,
2502
+ "learning_rate": 2.0391239427440265e-06,
2503
+ "loss": 0.1025,
2504
+ "step": 174500
2505
+ },
2506
+ {
2507
+ "epoch": 4.0429710061222135,
2508
+ "grad_norm": 0.046045999974012375,
2509
+ "learning_rate": 2.0148052314411756e-06,
2510
+ "loss": 0.1007,
2511
+ "step": 175000
2512
+ },
2513
+ {
2514
+ "epoch": 4.054522351853991,
2515
+ "grad_norm": 8.707540512084961,
2516
+ "learning_rate": 1.990486520138325e-06,
2517
+ "loss": 0.1166,
2518
+ "step": 175500
2519
+ },
2520
+ {
2521
+ "epoch": 4.066073697585769,
2522
+ "grad_norm": 7.846928596496582,
2523
+ "learning_rate": 1.9661678088354744e-06,
2524
+ "loss": 0.1113,
2525
+ "step": 176000
2526
+ },
2527
+ {
2528
+ "epoch": 4.077625043317546,
2529
+ "grad_norm": 0.5696422457695007,
2530
+ "learning_rate": 1.941849097532624e-06,
2531
+ "loss": 0.1084,
2532
+ "step": 176500
2533
+ },
2534
+ {
2535
+ "epoch": 4.0891763890493245,
2536
+ "grad_norm": 61.14679718017578,
2537
+ "learning_rate": 1.917530386229773e-06,
2538
+ "loss": 0.1348,
2539
+ "step": 177000
2540
+ },
2541
+ {
2542
+ "epoch": 4.100727734781102,
2543
+ "grad_norm": 0.07799758017063141,
2544
+ "learning_rate": 1.8932116749269225e-06,
2545
+ "loss": 0.1197,
2546
+ "step": 177500
2547
+ },
2548
+ {
2549
+ "epoch": 4.11227908051288,
2550
+ "grad_norm": 0.017283083871006966,
2551
+ "learning_rate": 1.8688929636240719e-06,
2552
+ "loss": 0.0978,
2553
+ "step": 178000
2554
+ },
2555
+ {
2556
+ "epoch": 4.123830426244657,
2557
+ "grad_norm": 1.0960564613342285,
2558
+ "learning_rate": 1.8445742523212212e-06,
2559
+ "loss": 0.1159,
2560
+ "step": 178500
2561
+ },
2562
+ {
2563
+ "epoch": 4.135381771976435,
2564
+ "grad_norm": 0.3672012984752655,
2565
+ "learning_rate": 1.8202555410183706e-06,
2566
+ "loss": 0.1079,
2567
+ "step": 179000
2568
+ },
2569
+ {
2570
+ "epoch": 4.146933117708213,
2571
+ "grad_norm": 0.6404430866241455,
2572
+ "learning_rate": 1.79593682971552e-06,
2573
+ "loss": 0.1181,
2574
+ "step": 179500
2575
+ },
2576
+ {
2577
+ "epoch": 4.158484463439991,
2578
+ "grad_norm": 10.991400718688965,
2579
+ "learning_rate": 1.7716181184126694e-06,
2580
+ "loss": 0.1036,
2581
+ "step": 180000
2582
+ },
2583
+ {
2584
+ "epoch": 4.170035809171768,
2585
+ "grad_norm": 116.5245132446289,
2586
+ "learning_rate": 1.7472994071098187e-06,
2587
+ "loss": 0.1216,
2588
+ "step": 180500
2589
+ },
2590
+ {
2591
+ "epoch": 4.181587154903546,
2592
+ "grad_norm": 8.385322570800781,
2593
+ "learning_rate": 1.7229806958069679e-06,
2594
+ "loss": 0.1096,
2595
+ "step": 181000
2596
+ },
2597
+ {
2598
+ "epoch": 4.193138500635324,
2599
+ "grad_norm": 11.287476539611816,
2600
+ "learning_rate": 1.6986619845041173e-06,
2601
+ "loss": 0.1426,
2602
+ "step": 181500
2603
+ },
2604
+ {
2605
+ "epoch": 4.204689846367102,
2606
+ "grad_norm": 0.3966763913631439,
2607
+ "learning_rate": 1.6743432732012666e-06,
2608
+ "loss": 0.1071,
2609
+ "step": 182000
2610
+ },
2611
+ {
2612
+ "epoch": 4.216241192098879,
2613
+ "grad_norm": 0.48013606667518616,
2614
+ "learning_rate": 1.650024561898416e-06,
2615
+ "loss": 0.1127,
2616
+ "step": 182500
2617
+ },
2618
+ {
2619
+ "epoch": 4.227792537830657,
2620
+ "grad_norm": 0.44732511043548584,
2621
+ "learning_rate": 1.6257058505955654e-06,
2622
+ "loss": 0.1217,
2623
+ "step": 183000
2624
+ },
2625
+ {
2626
+ "epoch": 4.239343883562435,
2627
+ "grad_norm": 9.097972869873047,
2628
+ "learning_rate": 1.6013871392927148e-06,
2629
+ "loss": 0.0979,
2630
+ "step": 183500
2631
+ },
2632
+ {
2633
+ "epoch": 4.250895229294213,
2634
+ "grad_norm": 0.002253670943900943,
2635
+ "learning_rate": 1.5770684279898641e-06,
2636
+ "loss": 0.1235,
2637
+ "step": 184000
2638
+ },
2639
+ {
2640
+ "epoch": 4.26244657502599,
2641
+ "grad_norm": 0.005998513661324978,
2642
+ "learning_rate": 1.5527497166870135e-06,
2643
+ "loss": 0.1151,
2644
+ "step": 184500
2645
+ },
2646
+ {
2647
+ "epoch": 4.273997920757768,
2648
+ "grad_norm": 0.0003486165660433471,
2649
+ "learning_rate": 1.5284310053841629e-06,
2650
+ "loss": 0.1148,
2651
+ "step": 185000
2652
+ },
2653
+ {
2654
+ "epoch": 4.2855492664895465,
2655
+ "grad_norm": 0.5390808582305908,
2656
+ "learning_rate": 1.5041122940813122e-06,
2657
+ "loss": 0.1142,
2658
+ "step": 185500
2659
+ },
2660
+ {
2661
+ "epoch": 4.297100612221324,
2662
+ "grad_norm": 0.0038761922623962164,
2663
+ "learning_rate": 1.4797935827784616e-06,
2664
+ "loss": 0.1213,
2665
+ "step": 186000
2666
+ },
2667
+ {
2668
+ "epoch": 4.308651957953101,
2669
+ "grad_norm": 0.47574523091316223,
2670
+ "learning_rate": 1.455474871475611e-06,
2671
+ "loss": 0.1151,
2672
+ "step": 186500
2673
+ },
2674
+ {
2675
+ "epoch": 4.320203303684879,
2676
+ "grad_norm": 0.0015592218842357397,
2677
+ "learning_rate": 1.4311561601727604e-06,
2678
+ "loss": 0.0933,
2679
+ "step": 187000
2680
+ },
2681
+ {
2682
+ "epoch": 4.331754649416657,
2683
+ "grad_norm": 0.01165091060101986,
2684
+ "learning_rate": 1.4068374488699097e-06,
2685
+ "loss": 0.129,
2686
+ "step": 187500
2687
+ },
2688
+ {
2689
+ "epoch": 4.343305995148435,
2690
+ "grad_norm": 0.5419438481330872,
2691
+ "learning_rate": 1.3825187375670591e-06,
2692
+ "loss": 0.1179,
2693
+ "step": 188000
2694
+ },
2695
+ {
2696
+ "epoch": 4.354857340880213,
2697
+ "grad_norm": 0.0008570736390538514,
2698
+ "learning_rate": 1.3582000262642083e-06,
2699
+ "loss": 0.1194,
2700
+ "step": 188500
2701
+ },
2702
+ {
2703
+ "epoch": 4.36640868661199,
2704
+ "grad_norm": 0.5189793705940247,
2705
+ "learning_rate": 1.3338813149613577e-06,
2706
+ "loss": 0.1217,
2707
+ "step": 189000
2708
+ },
2709
+ {
2710
+ "epoch": 4.377960032343768,
2711
+ "grad_norm": 0.013138922862708569,
2712
+ "learning_rate": 1.309562603658507e-06,
2713
+ "loss": 0.108,
2714
+ "step": 189500
2715
+ },
2716
+ {
2717
+ "epoch": 4.389511378075546,
2718
+ "grad_norm": 8.491416931152344,
2719
+ "learning_rate": 1.2852438923556564e-06,
2720
+ "loss": 0.1208,
2721
+ "step": 190000
2722
+ },
2723
+ {
2724
+ "epoch": 4.401062723807324,
2725
+ "grad_norm": 44.72172927856445,
2726
+ "learning_rate": 1.2609251810528058e-06,
2727
+ "loss": 0.114,
2728
+ "step": 190500
2729
+ },
2730
+ {
2731
+ "epoch": 4.412614069539101,
2732
+ "grad_norm": 0.020487921312451363,
2733
+ "learning_rate": 1.2366064697499551e-06,
2734
+ "loss": 0.1193,
2735
+ "step": 191000
2736
+ },
2737
+ {
2738
+ "epoch": 4.424165415270879,
2739
+ "grad_norm": 0.00253494456410408,
2740
+ "learning_rate": 1.2122877584471045e-06,
2741
+ "loss": 0.1214,
2742
+ "step": 191500
2743
+ },
2744
+ {
2745
+ "epoch": 4.435716761002657,
2746
+ "grad_norm": 0.0032089604064822197,
2747
+ "learning_rate": 1.1879690471442539e-06,
2748
+ "loss": 0.1047,
2749
+ "step": 192000
2750
+ },
2751
+ {
2752
+ "epoch": 4.447268106734435,
2753
+ "grad_norm": 0.012809256091713905,
2754
+ "learning_rate": 1.1636503358414033e-06,
2755
+ "loss": 0.1254,
2756
+ "step": 192500
2757
+ },
2758
+ {
2759
+ "epoch": 4.458819452466212,
2760
+ "grad_norm": 9.868932723999023,
2761
+ "learning_rate": 1.1393316245385526e-06,
2762
+ "loss": 0.1193,
2763
+ "step": 193000
2764
+ },
2765
+ {
2766
+ "epoch": 4.47037079819799,
2767
+ "grad_norm": 0.01743059977889061,
2768
+ "learning_rate": 1.115012913235702e-06,
2769
+ "loss": 0.1092,
2770
+ "step": 193500
2771
+ },
2772
+ {
2773
+ "epoch": 4.481922143929768,
2774
+ "grad_norm": 0.0163623858243227,
2775
+ "learning_rate": 1.0906942019328514e-06,
2776
+ "loss": 0.1356,
2777
+ "step": 194000
2778
+ },
2779
+ {
2780
+ "epoch": 4.493473489661546,
2781
+ "grad_norm": 0.24585314095020294,
2782
+ "learning_rate": 1.0663754906300005e-06,
2783
+ "loss": 0.1024,
2784
+ "step": 194500
2785
+ },
2786
+ {
2787
+ "epoch": 4.505024835393323,
2788
+ "grad_norm": 23.33641815185547,
2789
+ "learning_rate": 1.04205677932715e-06,
2790
+ "loss": 0.127,
2791
+ "step": 195000
2792
+ },
2793
+ {
2794
+ "epoch": 4.516576181125101,
2795
+ "grad_norm": 0.9842327833175659,
2796
+ "learning_rate": 1.0177380680242993e-06,
2797
+ "loss": 0.1275,
2798
+ "step": 195500
2799
+ },
2800
+ {
2801
+ "epoch": 4.528127526856879,
2802
+ "grad_norm": 134.488037109375,
2803
+ "learning_rate": 9.934193567214487e-07,
2804
+ "loss": 0.112,
2805
+ "step": 196000
2806
+ },
2807
+ {
2808
+ "epoch": 4.539678872588657,
2809
+ "grad_norm": 0.3840857446193695,
2810
+ "learning_rate": 9.69100645418598e-07,
2811
+ "loss": 0.1367,
2812
+ "step": 196500
2813
+ },
2814
+ {
2815
+ "epoch": 4.551230218320434,
2816
+ "grad_norm": 0.4089510440826416,
2817
+ "learning_rate": 9.447819341157474e-07,
2818
+ "loss": 0.1244,
2819
+ "step": 197000
2820
+ },
2821
+ {
2822
+ "epoch": 4.562781564052212,
2823
+ "grad_norm": 0.38987481594085693,
2824
+ "learning_rate": 9.204632228128968e-07,
2825
+ "loss": 0.1169,
2826
+ "step": 197500
2827
+ },
2828
+ {
2829
+ "epoch": 4.5743329097839895,
2830
+ "grad_norm": 0.0034936037845909595,
2831
+ "learning_rate": 8.961445115100462e-07,
2832
+ "loss": 0.1086,
2833
+ "step": 198000
2834
+ },
2835
+ {
2836
+ "epoch": 4.585884255515768,
2837
+ "grad_norm": 1.0578707456588745,
2838
+ "learning_rate": 8.718258002071955e-07,
2839
+ "loss": 0.1375,
2840
+ "step": 198500
2841
+ },
2842
+ {
2843
+ "epoch": 4.597435601247545,
2844
+ "grad_norm": 0.37568581104278564,
2845
+ "learning_rate": 8.475070889043449e-07,
2846
+ "loss": 0.1121,
2847
+ "step": 199000
2848
+ },
2849
+ {
2850
+ "epoch": 4.608986946979323,
2851
+ "grad_norm": 8.019853591918945,
2852
+ "learning_rate": 8.231883776014942e-07,
2853
+ "loss": 0.114,
2854
+ "step": 199500
2855
+ },
2856
+ {
2857
+ "epoch": 4.6205382927111005,
2858
+ "grad_norm": 0.37971556186676025,
2859
+ "learning_rate": 7.988696662986436e-07,
2860
+ "loss": 0.1159,
2861
+ "step": 200000
2862
+ },
2863
+ {
2864
+ "epoch": 4.632089638442879,
2865
+ "grad_norm": 0.3687286078929901,
2866
+ "learning_rate": 7.745509549957929e-07,
2867
+ "loss": 0.1232,
2868
+ "step": 200500
2869
+ },
2870
+ {
2871
+ "epoch": 4.643640984174656,
2872
+ "grad_norm": 0.704964280128479,
2873
+ "learning_rate": 7.502322436929423e-07,
2874
+ "loss": 0.1206,
2875
+ "step": 201000
2876
+ },
2877
+ {
2878
+ "epoch": 4.655192329906434,
2879
+ "grad_norm": 0.5293818116188049,
2880
+ "learning_rate": 7.259135323900917e-07,
2881
+ "loss": 0.1098,
2882
+ "step": 201500
2883
+ },
2884
+ {
2885
+ "epoch": 4.6667436756382115,
2886
+ "grad_norm": 0.0005172818782739341,
2887
+ "learning_rate": 7.01594821087241e-07,
2888
+ "loss": 0.1186,
2889
+ "step": 202000
2890
+ },
2891
+ {
2892
+ "epoch": 4.67829502136999,
2893
+ "grad_norm": 0.004838371649384499,
2894
+ "learning_rate": 6.772761097843904e-07,
2895
+ "loss": 0.1187,
2896
+ "step": 202500
2897
+ },
2898
+ {
2899
+ "epoch": 4.689846367101767,
2900
+ "grad_norm": 0.03154684230685234,
2901
+ "learning_rate": 6.529573984815398e-07,
2902
+ "loss": 0.1172,
2903
+ "step": 203000
2904
+ },
2905
+ {
2906
+ "epoch": 4.701397712833545,
2907
+ "grad_norm": 0.02304168976843357,
2908
+ "learning_rate": 6.286386871786892e-07,
2909
+ "loss": 0.1155,
2910
+ "step": 203500
2911
+ },
2912
+ {
2913
+ "epoch": 4.712949058565323,
2914
+ "grad_norm": 14.163298606872559,
2915
+ "learning_rate": 6.043199758758384e-07,
2916
+ "loss": 0.1396,
2917
+ "step": 204000
2918
+ },
2919
+ {
2920
+ "epoch": 4.724500404297101,
2921
+ "grad_norm": 0.0029035513289272785,
2922
+ "learning_rate": 5.800012645729878e-07,
2923
+ "loss": 0.1109,
2924
+ "step": 204500
2925
+ },
2926
+ {
2927
+ "epoch": 4.736051750028878,
2928
+ "grad_norm": 0.3000687062740326,
2929
+ "learning_rate": 5.556825532701372e-07,
2930
+ "loss": 0.1184,
2931
+ "step": 205000
2932
+ },
2933
+ {
2934
+ "epoch": 4.747603095760656,
2935
+ "grad_norm": 9.624520301818848,
2936
+ "learning_rate": 5.313638419672866e-07,
2937
+ "loss": 0.1196,
2938
+ "step": 205500
2939
+ },
2940
+ {
2941
+ "epoch": 4.759154441492434,
2942
+ "grad_norm": 0.5075832605361938,
2943
+ "learning_rate": 5.070451306644359e-07,
2944
+ "loss": 0.1033,
2945
+ "step": 206000
2946
+ },
2947
+ {
2948
+ "epoch": 4.770705787224212,
2949
+ "grad_norm": 0.008784663863480091,
2950
+ "learning_rate": 4.827264193615852e-07,
2951
+ "loss": 0.1281,
2952
+ "step": 206500
2953
+ },
2954
+ {
2955
+ "epoch": 4.78225713295599,
2956
+ "grad_norm": 1.7812443971633911,
2957
+ "learning_rate": 4.5840770805873457e-07,
2958
+ "loss": 0.121,
2959
+ "step": 207000
2960
+ },
2961
+ {
2962
+ "epoch": 4.793808478687767,
2963
+ "grad_norm": 0.4291926622390747,
2964
+ "learning_rate": 4.3408899675588395e-07,
2965
+ "loss": 0.1297,
2966
+ "step": 207500
2967
+ },
2968
+ {
2969
+ "epoch": 4.805359824419545,
2970
+ "grad_norm": 0.002980246674269438,
2971
+ "learning_rate": 4.097702854530333e-07,
2972
+ "loss": 0.1237,
2973
+ "step": 208000
2974
+ },
2975
+ {
2976
+ "epoch": 4.8169111701513225,
2977
+ "grad_norm": 10.580425262451172,
2978
+ "learning_rate": 3.854515741501827e-07,
2979
+ "loss": 0.103,
2980
+ "step": 208500
2981
+ },
2982
+ {
2983
+ "epoch": 4.828462515883101,
2984
+ "grad_norm": 0.0029076312202960253,
2985
+ "learning_rate": 3.61132862847332e-07,
2986
+ "loss": 0.1036,
2987
+ "step": 209000
2988
+ },
2989
+ {
2990
+ "epoch": 4.840013861614878,
2991
+ "grad_norm": 25.58489418029785,
2992
+ "learning_rate": 3.368141515444814e-07,
2993
+ "loss": 0.129,
2994
+ "step": 209500
2995
+ },
2996
+ {
2997
+ "epoch": 4.851565207346656,
2998
+ "grad_norm": 0.0011458920780569315,
2999
+ "learning_rate": 3.1249544024163076e-07,
3000
+ "loss": 0.1046,
3001
+ "step": 210000
3002
+ },
3003
+ {
3004
+ "epoch": 4.8631165530784335,
3005
+ "grad_norm": 0.20516595244407654,
3006
+ "learning_rate": 2.8817672893878013e-07,
3007
+ "loss": 0.112,
3008
+ "step": 210500
3009
+ },
3010
+ {
3011
+ "epoch": 4.874667898810212,
3012
+ "grad_norm": 0.42036932706832886,
3013
+ "learning_rate": 2.6385801763592946e-07,
3014
+ "loss": 0.1165,
3015
+ "step": 211000
3016
+ },
3017
+ {
3018
+ "epoch": 4.886219244541989,
3019
+ "grad_norm": 0.7215604782104492,
3020
+ "learning_rate": 2.3953930633307883e-07,
3021
+ "loss": 0.1075,
3022
+ "step": 211500
3023
+ },
3024
+ {
3025
+ "epoch": 4.897770590273767,
3026
+ "grad_norm": 0.4458640217781067,
3027
+ "learning_rate": 2.152205950302282e-07,
3028
+ "loss": 0.1262,
3029
+ "step": 212000
3030
+ },
3031
+ {
3032
+ "epoch": 4.9093219360055445,
3033
+ "grad_norm": 0.16922150552272797,
3034
+ "learning_rate": 1.9090188372737752e-07,
3035
+ "loss": 0.1084,
3036
+ "step": 212500
3037
+ },
3038
+ {
3039
+ "epoch": 4.920873281737323,
3040
+ "grad_norm": 0.0319858156144619,
3041
+ "learning_rate": 1.665831724245269e-07,
3042
+ "loss": 0.1199,
3043
+ "step": 213000
3044
+ },
3045
+ {
3046
+ "epoch": 4.9324246274691,
3047
+ "grad_norm": 0.3711884319782257,
3048
+ "learning_rate": 1.4226446112167624e-07,
3049
+ "loss": 0.118,
3050
+ "step": 213500
3051
+ },
3052
+ {
3053
+ "epoch": 4.943975973200878,
3054
+ "grad_norm": 0.01757540926337242,
3055
+ "learning_rate": 1.179457498188256e-07,
3056
+ "loss": 0.1081,
3057
+ "step": 214000
3058
+ },
3059
+ {
3060
+ "epoch": 4.955527318932655,
3061
+ "grad_norm": 0.14560027420520782,
3062
+ "learning_rate": 9.362703851597496e-08,
3063
+ "loss": 0.0969,
3064
+ "step": 214500
3065
+ },
3066
+ {
3067
+ "epoch": 4.967078664664434,
3068
+ "grad_norm": 0.033302776515483856,
3069
+ "learning_rate": 6.930832721312432e-08,
3070
+ "loss": 0.1236,
3071
+ "step": 215000
3072
+ },
3073
+ {
3074
+ "epoch": 4.978630010396211,
3075
+ "grad_norm": 0.8961186408996582,
3076
+ "learning_rate": 4.498961591027369e-08,
3077
+ "loss": 0.1151,
3078
+ "step": 215500
3079
+ },
3080
+ {
3081
+ "epoch": 4.990181356127989,
3082
+ "grad_norm": 0.00015438823902513832,
3083
+ "learning_rate": 2.0670904607423046e-08,
3084
+ "loss": 0.0969,
3085
+ "step": 216000
3086
+ },
3087
+ {
3088
+ "epoch": 5.0,
3089
+ "eval_accuracy": 0.8958761695737554,
3090
+ "eval_f1": 0.9035956450129409,
3091
+ "eval_loss": 1.049615502357483,
3092
+ "eval_matthews_correlation": 0.7986316857596628,
3093
+ "eval_precision": 0.8506645187273459,
3094
+ "eval_recall": 0.9635509328953971,
3095
+ "eval_runtime": 542.2961,
3096
+ "eval_samples_per_second": 79.818,
3097
+ "eval_steps_per_second": 4.99,
3098
+ "step": 216425
3099
+ }
3100
+ ],
3101
+ "logging_steps": 500,
3102
+ "max_steps": 216425,
3103
+ "num_input_tokens_seen": 0,
3104
+ "num_train_epochs": 5,
3105
+ "save_steps": 200,
3106
+ "stateful_callbacks": {
3107
+ "TrainerControl": {
3108
+ "args": {
3109
+ "should_epoch_stop": false,
3110
+ "should_evaluate": false,
3111
+ "should_log": false,
3112
+ "should_save": true,
3113
+ "should_training_stop": true
3114
+ },
3115
+ "attributes": {}
3116
+ }
3117
+ },
3118
+ "total_flos": 4.8146899081214976e+17,
3119
+ "train_batch_size": 8,
3120
+ "trial_name": null,
3121
+ "trial_params": null
3122
+ }
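The trainer state above logs the training loss every 500 steps (`"logging_steps": 500`) and a full evaluation at each epoch boundary; note that the eval loss rises between epochs 4 and 5 (0.936 → 1.050) even as accuracy and MCC improve, a common sign of increasingly confident predictions. A minimal sketch for summarizing the file, assuming the checkpoint directory has been downloaded locally:

```python
# Sketch: summarize this trainer_state.json (the path is an assumption;
# matplotlib is only needed for the plot).
import json

import matplotlib.pyplot as plt

with open("checkpoint-216425/trainer_state.json") as f:
    state = json.load(f)

# Step-wise entries carry "loss"; epoch-boundary entries carry "eval_*" metrics.
train_log = [e for e in state["log_history"] if "loss" in e]
eval_log = [e for e in state["log_history"] if "eval_loss" in e]

for e in eval_log:
    print(f"epoch {e['epoch']:.0f}: acc={e['eval_accuracy']:.4f}  "
          f"F1={e['eval_f1']:.4f}  MCC={e['eval_matthews_correlation']:.4f}")

plt.plot([e["step"] for e in train_log], [e["loss"] for e in train_log])
plt.xlabel("step")
plt.ylabel("training loss")
plt.savefig("training_loss.png")
```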
checkpoint-216425/training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505
3
+ size 5432
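The binary artifacts in this commit (`training_args.bin`, `model.safetensors`, `optimizer.pt`, …) are stored as Git LFS pointers: three lines giving the spec URL, the SHA-256 of the real payload, and its size in bytes. A hedged sketch for checking a downloaded payload against its pointer, using the oid and size recorded above (the local path is an assumption):

```python
# Sketch: verify a fetched LFS payload against the oid/size in its pointer file.
import hashlib
import os

payload = "checkpoint-216425/training_args.bin"  # illustrative local path
expected_oid = "e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505"
expected_size = 5432

digest = hashlib.sha256()
with open(payload, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(payload) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("payload matches its LFS pointer")
```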
checkpoint-43285/config.json ADDED
@@ -0,0 +1,48 @@
1
+ {
2
+ "_name_or_path": "models/plant-dnamamba-BPE",
3
+ "architectures": [
4
+ "MambaForSequenceClassification"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration_mamba.MambaConfig",
8
+ "AutoModel": "modeling_mamba.MambaModel",
9
+ "AutoModelForCausalLM": "modeling_mamba.MambaForCausalLM",
10
+ "AutoModelForSequenceClassification": "modeling_mamba.MambaForSequenceClassification"
11
+ },
12
+ "bos_token_id": 0,
13
+ "conv_kernel": 4,
14
+ "d_inner": 1536,
15
+ "d_model": 768,
16
+ "eos_token_id": 0,
17
+ "expand": 2,
18
+ "fused_add_norm": true,
19
+ "hidden_act": "silu",
20
+ "hidden_size": 768,
21
+ "initializer_range": 0.1,
22
+ "intermediate_size": 1536,
23
+ "layer_norm_epsilon": 1e-05,
24
+ "model_type": "mamba",
25
+ "n_layer": 24,
26
+ "num_hidden_layers": 24,
27
+ "pad_token_id": 0,
28
+ "pad_vocab_size_multiple": 8,
29
+ "problem_type": "single_label_classification",
30
+ "rescale_prenorm_residual": false,
31
+ "residual_in_fp32": true,
32
+ "rms_norm": true,
33
+ "ssm_cfg": {},
34
+ "state_size": 16,
35
+ "time_step_floor": 0.0001,
36
+ "time_step_init_scheme": "random",
37
+ "time_step_max": 0.1,
38
+ "time_step_min": 0.001,
39
+ "time_step_rank": 48,
40
+ "time_step_scale": 1.0,
41
+ "torch_dtype": "float32",
42
+ "transformers_version": "4.45.2",
43
+ "use_bias": false,
44
+ "use_cache": false,
45
+ "use_conv_bias": true,
46
+ "use_mambapy": true,
47
+ "vocab_size": 8000
48
+ }
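Because `auto_map` routes the auto classes to the bundled `configuration_mamba.py` / `modeling_mamba.py`, loading a checkpoint requires `trust_remote_code=True`. A minimal loading sketch (the local path is an assumption; a Hub repo id works the same way):

```python
# Sketch: load the fine-tuned classifier through the transformers auto classes.
# trust_remote_code=True is required because of the custom auto_map above.
from transformers import AutoConfig, AutoModelForSequenceClassification

ckpt = "checkpoint-43285"  # illustrative; use the Hub repo id once uploaded
config = AutoConfig.from_pretrained(ckpt, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, trust_remote_code=True)
print(config.model_type, config.vocab_size, config.num_hidden_layers)  # mamba 8000 24
```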
checkpoint-43285/configuration_mamba.py ADDED
@@ -0,0 +1,157 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 The HuggingFace Inc. team.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+ """MAMBA configuration"""
16
+
17
+ import math
18
+
19
+ from transformers.configuration_utils import PretrainedConfig
20
+ from transformers.utils import logging
21
+
22
+
23
+ logger = logging.get_logger(__name__)
24
+
25
+
26
+ class MambaConfig(PretrainedConfig):
27
+ """
28
+ This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
29
+ model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
30
+ defaults will yield a similar configuration to that of the MAMBA
31
+ [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.
32
+
33
+ Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
34
+ documentation from [`PretrainedConfig`] for more information.
35
+
36
+
37
+ Args:
38
+ vocab_size (`int`, *optional*, defaults to 50280):
39
+ Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the
40
+ `inputs_ids` passed when calling [`MambaModel`].
41
+ hidden_size (`int`, *optional*, defaults to 768):
42
+ Dimensionality of the embeddings and hidden states.
43
+ state_size (`int`, *optional*, defaults to 16): shape of the state space latents.
44
+ num_hidden_layers (`int`, *optional*, defaults to 32):
45
+ Number of hidden layers in the model.
46
+ layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
47
+ The epsilon to use in the layer normalization layers.
48
+ pad_token_id (`int`, *optional*, defaults to 0):
49
+ Padding token id.
50
+ bos_token_id (`int`, *optional*, defaults to 0):
51
+ The id of the beginning of sentence token in the vocabulary.
52
+ eos_token_id (`int`, *optional*, defaults to 0):
53
+ The id of the end of sentence token in the vocabulary.
54
+ expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
55
+ conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
56
+ use_bias (`bool`, *optional*, defaults to `False`):
57
+ Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block
58
+ use_conv_bias (`bool`, *optional*, defaults to `True`):
59
+ Whether or not to use bias in the convolution layer of the mixer block.
60
+ hidden_act (`str`, *optional*, defaults to `"silu"`):
61
+ The non-linear activation function (function or string) in the decoder.
62
+ initializer_range (`float`, *optional*, defaults to 0.1):
63
+ The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
64
+ residual_in_fp32 (`bool`, *optional*, defaults to `True`):
65
+ Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model
66
+ time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
67
+ Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`
68
+ time_step_scale (`float`, *optional*, defaults to 1.0):
69
+ Scale used to scale `dt_proj.bias`.
70
+ time_step_min (`float`, *optional*, defaults to 0.001):
71
+ Minimum `time_step` used to bound `dt_proj.bias`.
72
+ time_step_max (`float`, *optional*, defaults to 0.1):
73
+ Maximum `time_step` used to bound `dt_proj.bias`.
74
+ time_step_init_scheme (`float`, *optional*, defaults to `"random"`):
75
+ Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]`
76
+ time_step_floor (`float`, *optional*, defaults to 0.0001):
77
+ Minimum clamping value of the `dt_proj.bias` layer initialization.
78
+ rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
79
+ Whether or not to rescale `out_proj` weights when initializing.
80
+ use_cache (`bool`, *optional*, defaults to `True`):
81
+ Whether or not the cache should be used.
82
+ use_mambapy (`bool`, *optional*, defaults to `False`):
83
+ Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.
84
+
85
+
86
+ Example:
87
+
88
+ ```python
89
+ >>> from transformers import MambaConfig, MambaModel
90
+
91
+ >>> # Initializing a Mamba configuration
92
+ >>> configuration = MambaConfig()
93
+
94
+ >>> # Initializing a model (with random weights) from the configuration
95
+ >>> model = MambaModel(configuration)
96
+
97
+ >>> # Accessing the model configuration
98
+ >>> configuration = model.config
99
+ ```"""
100
+
101
+ model_type = "mamba"
102
+
103
+ def __init__(
104
+ self,
105
+ vocab_size=50280,
106
+ hidden_size=768,
107
+ state_size=16,
108
+ num_hidden_layers=32,
109
+ layer_norm_epsilon=1e-5,
110
+ pad_token_id=0,
111
+ bos_token_id=0,
112
+ eos_token_id=0,
113
+ expand=2,
114
+ conv_kernel=4,
115
+ use_bias=False,
116
+ use_conv_bias=True,
117
+ hidden_act="silu",
118
+ initializer_range=0.1,
119
+ residual_in_fp32=True,
120
+ time_step_rank="auto",
121
+ time_step_scale=1.0,
122
+ time_step_min=0.001,
123
+ time_step_max=0.1,
124
+ time_step_init_scheme="random",
125
+ time_step_floor=1e-4,
126
+ rescale_prenorm_residual=False,
127
+ use_cache=True,
128
+ use_mambapy=False,
129
+ **kwargs,
130
+ ):
131
+ self.vocab_size = vocab_size
132
+ self.hidden_size = hidden_size
133
+ self.state_size = state_size
134
+ self.num_hidden_layers = num_hidden_layers
135
+ self.layer_norm_epsilon = layer_norm_epsilon
136
+ self.conv_kernel = conv_kernel
137
+ self.expand = expand
138
+ self.intermediate_size = int(expand * self.hidden_size)
139
+ self.bos_token_id = bos_token_id
140
+ self.eos_token_id = eos_token_id
141
+ self.pad_token_id = pad_token_id
142
+ self.use_bias = use_bias
143
+ self.use_conv_bias = use_conv_bias
144
+ self.hidden_act = hidden_act
145
+ self.initializer_range = initializer_range
146
+ self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
147
+ self.time_step_scale = time_step_scale
148
+ self.time_step_min = time_step_min
149
+ self.time_step_max = time_step_max
150
+ self.time_step_init_scheme = time_step_init_scheme
151
+ self.time_step_floor = time_step_floor
152
+ self.rescale_prenorm_residual = rescale_prenorm_residual
153
+ self.residual_in_fp32 = residual_in_fp32
154
+ self.use_cache = use_cache
155
+ self.use_mambapy = use_mambapy
156
+
157
+ super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, **kwargs)
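Note the `"auto"` branch for `time_step_rank`: it resolves to `math.ceil(hidden_size / 16)`, which for `hidden_size = 768` gives exactly the `48` recorded in the checkpoint's `config.json`. A small sketch instantiating the config with this checkpoint's values (it assumes `configuration_mamba.py` is importable from the working directory):

```python
# Sketch: rebuild the checkpoint's configuration directly from the class above.
# Values mirror checkpoint-43285/config.json.
from configuration_mamba import MambaConfig

config = MambaConfig(
    vocab_size=8000,
    hidden_size=768,
    state_size=16,
    num_hidden_layers=24,
    time_step_rank="auto",  # resolves to math.ceil(768 / 16) == 48
    use_cache=False,
    use_mambapy=True,
)
assert config.time_step_rank == 48
assert config.intermediate_size == 1536  # int(expand * hidden_size) == 2 * 768
```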
checkpoint-43285/model.safetensors ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3acf29af545a714f05a70937f630429464e514b8403bcf91e411c935ef2f3e60
3
+ size 386689624
checkpoint-43285/optimizer.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0764b1fe9f895fea22e5dbba96b4d70126d41d03b9d24f0bb207f987ebbadbfd
3
+ size 773531386
checkpoint-43285/rng_state.pth ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5074130e5ff983016a18b27d012c95161c0e42ff3b6be3b0d0fc12006d9a4354
3
+ size 14244
checkpoint-43285/scheduler.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b23cb750cce38d77102f0ef5d5140ac160c199a6355f744cf82323f68f1e8ebf
3
+ size 1064
checkpoint-43285/special_tokens_map.json ADDED
@@ -0,0 +1,30 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|endoftext|>",
4
+ "lstrip": false,
5
+ "normalized": false,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": false,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<|endoftext|>",
18
+ "lstrip": false,
19
+ "normalized": false,
20
+ "rstrip": false,
21
+ "single_word": false
22
+ },
23
+ "unk_token": {
24
+ "content": "<|endoftext|>",
25
+ "lstrip": false,
26
+ "normalized": false,
27
+ "rstrip": false,
28
+ "single_word": false
29
+ }
30
+ }
checkpoint-43285/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
checkpoint-43285/tokenizer_config.json ADDED
@@ -0,0 +1,37 @@
1
+ {
2
+ "add_bos_token": false,
3
+ "add_eos_token": false,
4
+ "add_prefix_space": false,
5
+ "added_tokens_decoder": {
6
+ "0": {
7
+ "content": "<|endoftext|>",
8
+ "lstrip": false,
9
+ "normalized": false,
10
+ "rstrip": false,
11
+ "single_word": false,
12
+ "special": true
13
+ },
14
+ "1": {
15
+ "content": "<|padding|>",
16
+ "lstrip": false,
17
+ "normalized": false,
18
+ "rstrip": false,
19
+ "single_word": false,
20
+ "special": true
21
+ }
22
+ },
23
+ "bos_token": "<|endoftext|>",
24
+ "clean_up_tokenization_spaces": true,
25
+ "eos_token": "<|endoftext|>",
26
+ "max_length": 1024,
27
+ "model_max_length": 512,
28
+ "pad_to_multiple_of": null,
29
+ "pad_token": "<|endoftext|>",
30
+ "pad_token_type_id": 0,
31
+ "padding_side": "right",
32
+ "stride": 0,
33
+ "tokenizer_class": "GPTNeoXTokenizer",
34
+ "truncation_side": "right",
35
+ "truncation_strategy": "longest_first",
36
+ "unk_token": "<|endoftext|>"
37
+ }
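The tokenizer is GPT-NeoX-style BPE with `<|endoftext|>` doubling as BOS/EOS/PAD/UNK, a `model_max_length` of 512, and right-side padding and truncation. A hedged usage sketch (the sequence and path are illustrative):

```python
# Sketch: tokenize a promoter fragment the way this checkpoint expects.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("checkpoint-43285")  # illustrative path
seq = "ATGCGTACGTTAGC" * 50  # made-up DNA fragment

enc = tokenizer(seq, truncation=True, padding="max_length", max_length=512,
                return_tensors="pt")
print(enc["input_ids"].shape)  # torch.Size([1, 512])
```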
checkpoint-43285/trainer_state.json ADDED
@@ -0,0 +1,648 @@
1
+ {
2
+ "best_metric": 0.7890500619140279,
3
+ "best_model_checkpoint": "finetune/models/plant-dnamamba-BPE-NoduleExpDup/checkpoint-43285",
4
+ "epoch": 1.0,
5
+ "eval_steps": 500,
6
+ "global_step": 43285,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.011551345731777752,
13
+ "grad_norm": 29.45172119140625,
14
+ "learning_rate": 4.620218074293107e-07,
15
+ "loss": 1.0687,
16
+ "step": 500
17
+ },
18
+ {
19
+ "epoch": 0.023102691463555505,
20
+ "grad_norm": 31.642974853515625,
21
+ "learning_rate": 9.240436148586214e-07,
22
+ "loss": 0.9807,
23
+ "step": 1000
24
+ },
25
+ {
26
+ "epoch": 0.034654037195333255,
27
+ "grad_norm": 12.195789337158203,
28
+ "learning_rate": 1.3860654222879322e-06,
29
+ "loss": 0.8373,
30
+ "step": 1500
31
+ },
32
+ {
33
+ "epoch": 0.04620538292711101,
34
+ "grad_norm": 13.536879539489746,
35
+ "learning_rate": 1.8480872297172428e-06,
36
+ "loss": 0.7818,
37
+ "step": 2000
38
+ },
39
+ {
40
+ "epoch": 0.05775672865888876,
41
+ "grad_norm": 13.85306453704834,
42
+ "learning_rate": 2.3101090371465534e-06,
43
+ "loss": 0.7234,
44
+ "step": 2500
45
+ },
46
+ {
47
+ "epoch": 0.06930807439066651,
48
+ "grad_norm": 11.797886848449707,
49
+ "learning_rate": 2.7721308445758644e-06,
50
+ "loss": 0.6925,
51
+ "step": 3000
52
+ },
53
+ {
54
+ "epoch": 0.08085942012244426,
55
+ "grad_norm": 6.770650386810303,
56
+ "learning_rate": 3.2341526520051746e-06,
57
+ "loss": 0.6751,
58
+ "step": 3500
59
+ },
60
+ {
61
+ "epoch": 0.09241076585422202,
62
+ "grad_norm": 10.289820671081543,
63
+ "learning_rate": 3.6961744594344856e-06,
64
+ "loss": 0.6636,
65
+ "step": 4000
66
+ },
67
+ {
68
+ "epoch": 0.10396211158599977,
69
+ "grad_norm": 15.198243141174316,
70
+ "learning_rate": 4.158196266863797e-06,
71
+ "loss": 0.6563,
72
+ "step": 4500
73
+ },
74
+ {
75
+ "epoch": 0.11551345731777753,
76
+ "grad_norm": 13.971009254455566,
77
+ "learning_rate": 4.620218074293107e-06,
78
+ "loss": 0.6481,
79
+ "step": 5000
80
+ },
81
+ {
82
+ "epoch": 0.12706480304955528,
83
+ "grad_norm": 8.441276550292969,
84
+ "learning_rate": 5.082239881722417e-06,
85
+ "loss": 0.6549,
86
+ "step": 5500
87
+ },
88
+ {
89
+ "epoch": 0.13861614878133302,
90
+ "grad_norm": 14.8948335647583,
91
+ "learning_rate": 5.544261689151729e-06,
92
+ "loss": 0.6507,
93
+ "step": 6000
94
+ },
95
+ {
96
+ "epoch": 0.1501674945131108,
97
+ "grad_norm": 7.725317478179932,
98
+ "learning_rate": 6.006283496581039e-06,
99
+ "loss": 0.6544,
100
+ "step": 6500
101
+ },
102
+ {
103
+ "epoch": 0.16171884024488853,
104
+ "grad_norm": 17.12333106994629,
105
+ "learning_rate": 6.468305304010349e-06,
106
+ "loss": 0.6476,
107
+ "step": 7000
108
+ },
109
+ {
110
+ "epoch": 0.17327018597666627,
111
+ "grad_norm": 7.033656120300293,
112
+ "learning_rate": 6.930327111439661e-06,
113
+ "loss": 0.6496,
114
+ "step": 7500
115
+ },
116
+ {
117
+ "epoch": 0.18482153170844404,
118
+ "grad_norm": 7.053399085998535,
119
+ "learning_rate": 7.392348918868971e-06,
120
+ "loss": 0.6523,
121
+ "step": 8000
122
+ },
123
+ {
124
+ "epoch": 0.19637287744022178,
125
+ "grad_norm": 6.655728340148926,
126
+ "learning_rate": 7.854370726298282e-06,
127
+ "loss": 0.6455,
128
+ "step": 8500
129
+ },
130
+ {
131
+ "epoch": 0.20792422317199954,
132
+ "grad_norm": 8.08369255065918,
133
+ "learning_rate": 8.316392533727593e-06,
134
+ "loss": 0.6415,
135
+ "step": 9000
136
+ },
137
+ {
138
+ "epoch": 0.21947556890377729,
139
+ "grad_norm": 3.947148084640503,
140
+ "learning_rate": 8.778414341156903e-06,
141
+ "loss": 0.6514,
142
+ "step": 9500
143
+ },
144
+ {
145
+ "epoch": 0.23102691463555505,
146
+ "grad_norm": 8.264023780822754,
147
+ "learning_rate": 9.240436148586214e-06,
148
+ "loss": 0.6431,
149
+ "step": 10000
150
+ },
151
+ {
152
+ "epoch": 0.2425782603673328,
153
+ "grad_norm": 4.74052095413208,
154
+ "learning_rate": 9.702457956015525e-06,
155
+ "loss": 0.6331,
156
+ "step": 10500
157
+ },
158
+ {
159
+ "epoch": 0.25412960609911056,
160
+ "grad_norm": 12.625064849853516,
161
+ "learning_rate": 9.991342538776186e-06,
162
+ "loss": 0.6449,
163
+ "step": 11000
164
+ },
165
+ {
166
+ "epoch": 0.2656809518308883,
167
+ "grad_norm": 17.814157485961914,
168
+ "learning_rate": 9.967023827473336e-06,
169
+ "loss": 0.6414,
170
+ "step": 11500
171
+ },
172
+ {
173
+ "epoch": 0.27723229756266604,
174
+ "grad_norm": 15.34868049621582,
175
+ "learning_rate": 9.942705116170485e-06,
176
+ "loss": 0.6287,
177
+ "step": 12000
178
+ },
179
+ {
180
+ "epoch": 0.2887836432944438,
181
+ "grad_norm": 5.785658836364746,
182
+ "learning_rate": 9.918386404867634e-06,
183
+ "loss": 0.6417,
184
+ "step": 12500
185
+ },
186
+ {
187
+ "epoch": 0.3003349890262216,
188
+ "grad_norm": 3.7156598567962646,
189
+ "learning_rate": 9.894067693564784e-06,
190
+ "loss": 0.6436,
191
+ "step": 13000
192
+ },
193
+ {
194
+ "epoch": 0.3118863347579993,
195
+ "grad_norm": 12.272802352905273,
196
+ "learning_rate": 9.869748982261933e-06,
197
+ "loss": 0.6505,
198
+ "step": 13500
199
+ },
200
+ {
201
+ "epoch": 0.32343768048977706,
202
+ "grad_norm": 15.592353820800781,
203
+ "learning_rate": 9.845430270959082e-06,
204
+ "loss": 0.6313,
205
+ "step": 14000
206
+ },
207
+ {
208
+ "epoch": 0.3349890262215548,
209
+ "grad_norm": 4.684946537017822,
210
+ "learning_rate": 9.821111559656232e-06,
211
+ "loss": 0.6457,
212
+ "step": 14500
213
+ },
214
+ {
215
+ "epoch": 0.34654037195333254,
216
+ "grad_norm": 6.0165300369262695,
217
+ "learning_rate": 9.79679284835338e-06,
218
+ "loss": 0.641,
219
+ "step": 15000
220
+ },
221
+ {
222
+ "epoch": 0.35809171768511033,
223
+ "grad_norm": 10.924590110778809,
224
+ "learning_rate": 9.772474137050531e-06,
225
+ "loss": 0.6293,
226
+ "step": 15500
227
+ },
228
+ {
229
+ "epoch": 0.3696430634168881,
230
+ "grad_norm": 7.296951770782471,
231
+ "learning_rate": 9.74815542574768e-06,
232
+ "loss": 0.627,
233
+ "step": 16000
234
+ },
235
+ {
236
+ "epoch": 0.3811944091486658,
237
+ "grad_norm": 14.750964164733887,
238
+ "learning_rate": 9.723836714444829e-06,
239
+ "loss": 0.6254,
240
+ "step": 16500
241
+ },
242
+ {
243
+ "epoch": 0.39274575488044355,
244
+ "grad_norm": 7.377384662628174,
245
+ "learning_rate": 9.699518003141977e-06,
246
+ "loss": 0.6287,
247
+ "step": 17000
248
+ },
249
+ {
250
+ "epoch": 0.40429710061222135,
251
+ "grad_norm": 6.540673732757568,
252
+ "learning_rate": 9.675199291839128e-06,
253
+ "loss": 0.6309,
254
+ "step": 17500
255
+ },
256
+ {
257
+ "epoch": 0.4158484463439991,
258
+ "grad_norm": 6.4445295333862305,
259
+ "learning_rate": 9.650880580536277e-06,
260
+ "loss": 0.6302,
261
+ "step": 18000
262
+ },
263
+ {
264
+ "epoch": 0.42739979207577683,
265
+ "grad_norm": 7.85426664352417,
266
+ "learning_rate": 9.626561869233427e-06,
267
+ "loss": 0.6201,
268
+ "step": 18500
269
+ },
270
+ {
271
+ "epoch": 0.43895113780755457,
272
+ "grad_norm": 5.960766792297363,
273
+ "learning_rate": 9.602243157930576e-06,
274
+ "loss": 0.626,
275
+ "step": 19000
276
+ },
277
+ {
278
+ "epoch": 0.4505024835393323,
279
+ "grad_norm": 9.376029968261719,
280
+ "learning_rate": 9.577924446627726e-06,
281
+ "loss": 0.6164,
282
+ "step": 19500
283
+ },
284
+ {
285
+ "epoch": 0.4620538292711101,
286
+ "grad_norm": 17.05916976928711,
287
+ "learning_rate": 9.553605735324875e-06,
288
+ "loss": 0.6232,
289
+ "step": 20000
290
+ },
291
+ {
292
+ "epoch": 0.47360517500288785,
293
+ "grad_norm": 7.523278713226318,
294
+ "learning_rate": 9.529287024022024e-06,
295
+ "loss": 0.601,
296
+ "step": 20500
297
+ },
298
+ {
299
+ "epoch": 0.4851565207346656,
300
+ "grad_norm": 6.75208044052124,
301
+ "learning_rate": 9.504968312719172e-06,
302
+ "loss": 0.6157,
303
+ "step": 21000
304
+ },
305
+ {
306
+ "epoch": 0.4967078664664433,
307
+ "grad_norm": 12.919069290161133,
308
+ "learning_rate": 9.480649601416323e-06,
309
+ "loss": 0.6098,
310
+ "step": 21500
311
+ },
312
+ {
313
+ "epoch": 0.5082592121982211,
314
+ "grad_norm": 19.89571189880371,
315
+ "learning_rate": 9.456330890113472e-06,
316
+ "loss": 0.6131,
317
+ "step": 22000
318
+ },
319
+ {
320
+ "epoch": 0.5198105579299989,
321
+ "grad_norm": 7.436350345611572,
322
+ "learning_rate": 9.432012178810622e-06,
323
+ "loss": 0.6121,
324
+ "step": 22500
325
+ },
326
+ {
327
+ "epoch": 0.5313619036617766,
328
+ "grad_norm": 10.576221466064453,
329
+ "learning_rate": 9.40769346750777e-06,
330
+ "loss": 0.6005,
331
+ "step": 23000
332
+ },
333
+ {
334
+ "epoch": 0.5429132493935543,
335
+ "grad_norm": 12.731452941894531,
336
+ "learning_rate": 9.383374756204921e-06,
337
+ "loss": 0.6217,
338
+ "step": 23500
339
+ },
340
+ {
341
+ "epoch": 0.5544645951253321,
342
+ "grad_norm": 9.219985961914062,
343
+ "learning_rate": 9.35905604490207e-06,
344
+ "loss": 0.5985,
345
+ "step": 24000
346
+ },
347
+ {
348
+ "epoch": 0.5660159408571098,
349
+ "grad_norm": 12.659988403320312,
350
+ "learning_rate": 9.334737333599219e-06,
351
+ "loss": 0.5951,
352
+ "step": 24500
353
+ },
354
+ {
355
+ "epoch": 0.5775672865888876,
356
+ "grad_norm": 32.879234313964844,
357
+ "learning_rate": 9.310418622296367e-06,
358
+ "loss": 0.5961,
359
+ "step": 25000
360
+ },
361
+ {
362
+ "epoch": 0.5891186323206653,
363
+ "grad_norm": 9.763121604919434,
364
+ "learning_rate": 9.286099910993518e-06,
365
+ "loss": 0.6081,
366
+ "step": 25500
367
+ },
368
+ {
369
+ "epoch": 0.6006699780524432,
370
+ "grad_norm": 9.057079315185547,
371
+ "learning_rate": 9.261781199690667e-06,
372
+ "loss": 0.5863,
373
+ "step": 26000
374
+ },
375
+ {
376
+ "epoch": 0.6122213237842209,
377
+ "grad_norm": 17.381582260131836,
378
+ "learning_rate": 9.237462488387817e-06,
379
+ "loss": 0.5857,
380
+ "step": 26500
381
+ },
382
+ {
383
+ "epoch": 0.6237726695159986,
384
+ "grad_norm": 18.3979549407959,
385
+ "learning_rate": 9.213143777084966e-06,
386
+ "loss": 0.5853,
387
+ "step": 27000
388
+ },
389
+ {
390
+ "epoch": 0.6353240152477764,
391
+ "grad_norm": 11.13846206665039,
392
+ "learning_rate": 9.188825065782114e-06,
393
+ "loss": 0.582,
394
+ "step": 27500
395
+ },
396
+ {
397
+ "epoch": 0.6468753609795541,
398
+ "grad_norm": 8.506060600280762,
399
+ "learning_rate": 9.164506354479265e-06,
400
+ "loss": 0.5773,
401
+ "step": 28000
402
+ },
403
+ {
404
+ "epoch": 0.6584267067113319,
405
+ "grad_norm": 9.814123153686523,
406
+ "learning_rate": 9.140187643176414e-06,
407
+ "loss": 0.5797,
408
+ "step": 28500
409
+ },
410
+ {
411
+ "epoch": 0.6699780524431096,
412
+ "grad_norm": 11.70206069946289,
413
+ "learning_rate": 9.115868931873562e-06,
414
+ "loss": 0.5587,
415
+ "step": 29000
416
+ },
417
+ {
418
+ "epoch": 0.6815293981748873,
419
+ "grad_norm": 11.949383735656738,
420
+ "learning_rate": 9.091550220570711e-06,
421
+ "loss": 0.5769,
422
+ "step": 29500
423
+ },
424
+ {
425
+ "epoch": 0.6930807439066651,
426
+ "grad_norm": 11.51278018951416,
427
+ "learning_rate": 9.067231509267862e-06,
428
+ "loss": 0.5798,
429
+ "step": 30000
430
+ },
431
+ {
432
+ "epoch": 0.7046320896384429,
433
+ "grad_norm": 13.705997467041016,
434
+ "learning_rate": 9.042912797965012e-06,
435
+ "loss": 0.5585,
436
+ "step": 30500
437
+ },
438
+ {
439
+ "epoch": 0.7161834353702207,
440
+ "grad_norm": 18.50054168701172,
441
+ "learning_rate": 9.01859408666216e-06,
442
+ "loss": 0.5656,
443
+ "step": 31000
444
+ },
445
+ {
446
+ "epoch": 0.7277347811019984,
447
+ "grad_norm": 23.920692443847656,
448
+ "learning_rate": 8.99427537535931e-06,
449
+ "loss": 0.5493,
450
+ "step": 31500
451
+ },
452
+ {
453
+ "epoch": 0.7392861268337761,
454
+ "grad_norm": 15.425186157226562,
455
+ "learning_rate": 8.96995666405646e-06,
456
+ "loss": 0.5538,
457
+ "step": 32000
458
+ },
459
+ {
460
+ "epoch": 0.7508374725655539,
461
+ "grad_norm": 11.077018737792969,
462
+ "learning_rate": 8.945637952753609e-06,
463
+ "loss": 0.5553,
464
+ "step": 32500
465
+ },
466
+ {
467
+ "epoch": 0.7623888182973316,
468
+ "grad_norm": 17.128128051757812,
469
+ "learning_rate": 8.921319241450757e-06,
470
+ "loss": 0.5657,
471
+ "step": 33000
472
+ },
473
+ {
474
+ "epoch": 0.7739401640291094,
475
+ "grad_norm": 37.22261428833008,
476
+ "learning_rate": 8.897000530147906e-06,
477
+ "loss": 0.5573,
478
+ "step": 33500
479
+ },
480
+ {
481
+ "epoch": 0.7854915097608871,
482
+ "grad_norm": 18.649255752563477,
483
+ "learning_rate": 8.872681818845056e-06,
484
+ "loss": 0.5337,
485
+ "step": 34000
486
+ },
487
+ {
488
+ "epoch": 0.7970428554926648,
489
+ "grad_norm": 11.49511432647705,
490
+ "learning_rate": 8.848363107542205e-06,
491
+ "loss": 0.5442,
492
+ "step": 34500
493
+ },
494
+ {
495
+ "epoch": 0.8085942012244427,
496
+ "grad_norm": 20.64067840576172,
497
+ "learning_rate": 8.824044396239356e-06,
498
+ "loss": 0.5632,
499
+ "step": 35000
500
+ },
501
+ {
502
+ "epoch": 0.8201455469562204,
503
+ "grad_norm": 18.51673126220703,
504
+ "learning_rate": 8.799725684936504e-06,
505
+ "loss": 0.5454,
506
+ "step": 35500
507
+ },
508
+ {
509
+ "epoch": 0.8316968926879982,
510
+ "grad_norm": 18.139997482299805,
511
+ "learning_rate": 8.775406973633655e-06,
512
+ "loss": 0.5371,
513
+ "step": 36000
514
+ },
515
+ {
516
+ "epoch": 0.8432482384197759,
517
+ "grad_norm": 21.852693557739258,
518
+ "learning_rate": 8.751088262330804e-06,
519
+ "loss": 0.539,
520
+ "step": 36500
521
+ },
522
+ {
523
+ "epoch": 0.8547995841515537,
524
+ "grad_norm": 31.914161682128906,
525
+ "learning_rate": 8.726769551027952e-06,
526
+ "loss": 0.5383,
527
+ "step": 37000
528
+ },
529
+ {
530
+ "epoch": 0.8663509298833314,
531
+ "grad_norm": 9.24451732635498,
532
+ "learning_rate": 8.702450839725101e-06,
533
+ "loss": 0.548,
534
+ "step": 37500
535
+ },
536
+ {
537
+ "epoch": 0.8779022756151091,
538
+ "grad_norm": 27.3262882232666,
539
+ "learning_rate": 8.678132128422251e-06,
540
+ "loss": 0.524,
541
+ "step": 38000
542
+ },
543
+ {
544
+ "epoch": 0.8894536213468869,
545
+ "grad_norm": 23.231952667236328,
546
+ "learning_rate": 8.6538134171194e-06,
547
+ "loss": 0.5121,
548
+ "step": 38500
549
+ },
550
+ {
551
+ "epoch": 0.9010049670786646,
552
+ "grad_norm": 28.35572052001953,
553
+ "learning_rate": 8.62949470581655e-06,
554
+ "loss": 0.5333,
555
+ "step": 39000
556
+ },
557
+ {
558
+ "epoch": 0.9125563128104425,
559
+ "grad_norm": 24.61405372619629,
560
+ "learning_rate": 8.6051759945137e-06,
561
+ "loss": 0.5015,
562
+ "step": 39500
563
+ },
564
+ {
565
+ "epoch": 0.9241076585422202,
566
+ "grad_norm": 30.62819480895996,
567
+ "learning_rate": 8.580857283210848e-06,
568
+ "loss": 0.5092,
569
+ "step": 40000
570
+ },
571
+ {
572
+ "epoch": 0.935659004273998,
573
+ "grad_norm": 19.295331954956055,
574
+ "learning_rate": 8.556538571907999e-06,
575
+ "loss": 0.5125,
576
+ "step": 40500
577
+ },
578
+ {
579
+ "epoch": 0.9472103500057757,
580
+ "grad_norm": 10.785584449768066,
581
+ "learning_rate": 8.532219860605147e-06,
582
+ "loss": 0.5161,
583
+ "step": 41000
584
+ },
585
+ {
586
+ "epoch": 0.9587616957375534,
587
+ "grad_norm": 24.81365394592285,
588
+ "learning_rate": 8.507901149302296e-06,
589
+ "loss": 0.498,
590
+ "step": 41500
591
+ },
592
+ {
593
+ "epoch": 0.9703130414693312,
594
+ "grad_norm": 48.026973724365234,
595
+ "learning_rate": 8.483582437999446e-06,
596
+ "loss": 0.4916,
597
+ "step": 42000
598
+ },
599
+ {
600
+ "epoch": 0.9818643872011089,
601
+ "grad_norm": 6.653892517089844,
602
+ "learning_rate": 8.459263726696595e-06,
603
+ "loss": 0.4944,
604
+ "step": 42500
605
+ },
606
+ {
607
+ "epoch": 0.9934157329328867,
608
+ "grad_norm": 39.814945220947266,
609
+ "learning_rate": 8.434945015393746e-06,
610
+ "loss": 0.4954,
611
+ "step": 43000
612
+ },
613
+ {
614
+ "epoch": 1.0,
615
+ "eval_accuracy": 0.7796003234376805,
616
+ "eval_f1": 0.7890500619140279,
617
+ "eval_loss": 0.4737609326839447,
618
+ "eval_matthews_correlation": 0.5599099791389662,
619
+ "eval_precision": 0.7656524910955671,
620
+ "eval_recall": 0.8139227225035354,
621
+ "eval_runtime": 550.4755,
622
+ "eval_samples_per_second": 78.632,
623
+ "eval_steps_per_second": 4.916,
624
+ "step": 43285
625
+ }
626
+ ],
627
+ "logging_steps": 500,
628
+ "max_steps": 216425,
629
+ "num_input_tokens_seen": 0,
630
+ "num_train_epochs": 5,
631
+ "save_steps": 200,
632
+ "stateful_callbacks": {
633
+ "TrainerControl": {
634
+ "args": {
635
+ "should_epoch_stop": false,
636
+ "should_evaluate": false,
637
+ "should_log": false,
638
+ "should_save": true,
639
+ "should_training_stop": false
640
+ },
641
+ "attributes": {}
642
+ }
643
+ },
644
+ "total_flos": 9.629379816242995e+16,
645
+ "train_batch_size": 8,
646
+ "trial_name": null,
647
+ "trial_params": null
648
+ }
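Since `trainer_state.json` is plain JSON, the logged loss curve and the end-of-epoch evaluation metrics above can be summarized with a few lines of standard-library Python. A minimal sketch, assuming the checkpoint layout shown in this commit (the path is illustrative):

```python
# Minimal sketch: summarize a checkpoint's trainer_state.json.
import json

with open("checkpoint-43285/trainer_state.json") as fh:
    state = json.load(fh)

# Training log entries carry a "loss" key; evaluation entries carry "eval_loss".
train_logs = [e for e in state["log_history"] if "loss" in e]
eval_logs = [e for e in state["log_history"] if "eval_loss" in e]

print(f"logged training steps: {len(train_logs)}")
print(f"final train loss: {train_logs[-1]['loss']}")
for e in eval_logs:
    print(f"epoch {e['epoch']}: eval_loss={e['eval_loss']:.4f}, "
          f"F1={e['eval_f1']:.4f}, MCC={e['eval_matthews_correlation']:.4f}")
```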
checkpoint-43285/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505
+ size 5432
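The three-line `*.bin`/`*.pt`/`*.safetensors` entries in this commit are Git LFS pointers: the binary payload lives in LFS storage, and the pointer records only the object's sha256 (`oid`) and byte size. A sketch for verifying a fetched file against its pointer (the path is illustrative; the oid is copied from the pointer above):

```python
# Minimal sketch: check a downloaded LFS object against its pointer file.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

expected = "e57c2bf20bb8ccd683529a85ac381c4cb1e75777081620776ff48030aab98505"
assert sha256_of("checkpoint-43285/training_args.bin") == expected
```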
checkpoint-86570/config.json ADDED
@@ -0,0 +1,48 @@
+ {
+ "_name_or_path": "models/plant-dnamamba-BPE",
+ "architectures": [
+ "MambaForSequenceClassification"
+ ],
+ "auto_map": {
+ "AutoConfig": "configuration_mamba.MambaConfig",
+ "AutoModel": "modeling_mamba.MambaModel",
+ "AutoModelForCausalLM": "modeling_mamba.MambaForCausalLM",
+ "AutoModelForSequenceClassification": "modeling_mamba.MambaForSequenceClassification"
+ },
+ "bos_token_id": 0,
+ "conv_kernel": 4,
+ "d_inner": 1536,
+ "d_model": 768,
+ "eos_token_id": 0,
+ "expand": 2,
+ "fused_add_norm": true,
+ "hidden_act": "silu",
+ "hidden_size": 768,
+ "initializer_range": 0.1,
+ "intermediate_size": 1536,
+ "layer_norm_epsilon": 1e-05,
+ "model_type": "mamba",
+ "n_layer": 24,
+ "num_hidden_layers": 24,
+ "pad_token_id": 0,
+ "pad_vocab_size_multiple": 8,
+ "problem_type": "single_label_classification",
+ "rescale_prenorm_residual": false,
+ "residual_in_fp32": true,
+ "rms_norm": true,
+ "ssm_cfg": {},
+ "state_size": 16,
+ "time_step_floor": 0.0001,
+ "time_step_init_scheme": "random",
+ "time_step_max": 0.1,
+ "time_step_min": 0.001,
+ "time_step_rank": 48,
+ "time_step_scale": 1.0,
+ "torch_dtype": "float32",
+ "transformers_version": "4.45.2",
+ "use_bias": false,
+ "use_cache": false,
+ "use_conv_bias": true,
+ "use_mambapy": true,
+ "vocab_size": 8000
+ }
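Because `config.json` declares an `auto_map`, the checkpoint can be loaded through the `Auto*` classes with `trust_remote_code=True`, which executes the bundled `configuration_mamba.py`/`modeling_mamba.py`. A minimal inference sketch, assuming the referenced `modeling_mamba.py` ships alongside the config and using an illustrative placeholder sequence (only `input_ids` are passed, since a custom Mamba `forward` may not accept an attention mask):

```python
# Minimal sketch: load a fine-tuned checkpoint for inference.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ckpt = "checkpoint-86570"  # illustrative local checkpoint directory
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, trust_remote_code=True)
model.eval()

seq = "ATGC" * 750  # illustrative placeholder DNA sequence
inputs = tokenizer(seq, return_tensors="pt")
with torch.no_grad():
    logits = model(input_ids=inputs["input_ids"]).logits
print(logits.softmax(dim=-1))  # per-class probabilities
```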
checkpoint-86570/configuration_mamba.py ADDED
@@ -0,0 +1,157 @@
+ # coding=utf-8
+ # Copyright 2024 The HuggingFace Inc. team.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MAMBA configuration"""
+
+ import math
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class MambaConfig(PretrainedConfig):
+     """
+     This is the configuration class to store the configuration of a [`MambaModel`]. It is used to instantiate a MAMBA
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the MAMBA
+     [state-spaces/mamba-2.8b](https://huggingface.co/state-spaces/mamba-2.8b) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 50280):
+             Vocabulary size of the MAMBA model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`MambaModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the embeddings and hidden states.
+         state_size (`int`, *optional*, defaults to 16): shape of the state space latents.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the model.
+         layer_norm_epsilon (`float`, *optional*, defaults to 1e-05):
+             The epsilon to use in the layer normalization layers.
+         pad_token_id (`int`, *optional*, defaults to 0):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 0):
+             The id of the beginning of sentence token in the vocabulary.
+         eos_token_id (`int`, *optional*, defaults to 0):
+             The id of the end of sentence token in the vocabulary.
+         expand (`int`, *optional*, defaults to 2): Expanding factor used to determine the intermediate size.
+         conv_kernel (`int`, *optional*, defaults to 4): Size of the convolution kernel.
+         use_bias (`bool`, *optional*, defaults to `False`):
+             Whether or not to use bias in ["in_proj", "out_proj"] of the mixer block.
+         use_conv_bias (`bool`, *optional*, defaults to `True`):
+             Whether or not to use bias in the convolution layer of the mixer block.
+         hidden_act (`str`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         initializer_range (`float`, *optional*, defaults to 0.1):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         residual_in_fp32 (`bool`, *optional*, defaults to `True`):
+             Whether or not residuals should be in `float32`. If set to `False` residuals will keep the same `dtype` as the rest of the model.
+         time_step_rank (`Union[int,str]`, *optional*, defaults to `"auto"`):
+             Rank of the discretization projection matrix. `"auto"` means that it will default to `math.ceil(self.hidden_size / 16)`.
+         time_step_scale (`float`, *optional*, defaults to 1.0):
+             Scale used to scale `dt_proj.bias`.
+         time_step_min (`float`, *optional*, defaults to 0.001):
+             Minimum `time_step` used to bound `dt_proj.bias`.
+         time_step_max (`float`, *optional*, defaults to 0.1):
+             Maximum `time_step` used to bound `dt_proj.bias`.
+         time_step_init_scheme (`float`, *optional*, defaults to `"random"`):
+             Init scheme used for `dt_proj.weight`. Should be one of `["random","uniform"]`.
+         time_step_floor (`float`, *optional*, defaults to 0.0001):
+             Minimum clamping value of the `dt_proj.bias` layer initialization.
+         rescale_prenorm_residual (`bool`, *optional*, defaults to `False`):
+             Whether or not to rescale `out_proj` weights when initializing.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the cache should be used.
+         use_mambapy (`bool`, *optional*, defaults to `False`):
+             Determines the fallback strategy during training if the CUDA-based official implementation of Mamba is not available. If `True`, the mamba.py implementation is used. If `False`, the naive and slower implementation is used. Consider switching to the naive version if memory is limited.
+
+
+     Example:
+
+     ```python
+     >>> from transformers import MambaConfig, MambaModel
+
+     >>> # Initializing a Mamba configuration
+     >>> configuration = MambaConfig()
+
+     >>> # Initializing a model (with random weights) from the configuration
+     >>> model = MambaModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "mamba"
+
+     def __init__(
+         self,
+         vocab_size=50280,
+         hidden_size=768,
+         state_size=16,
+         num_hidden_layers=32,
+         layer_norm_epsilon=1e-5,
+         pad_token_id=0,
+         bos_token_id=0,
+         eos_token_id=0,
+         expand=2,
+         conv_kernel=4,
+         use_bias=False,
+         use_conv_bias=True,
+         hidden_act="silu",
+         initializer_range=0.1,
+         residual_in_fp32=True,
+         time_step_rank="auto",
+         time_step_scale=1.0,
+         time_step_min=0.001,
+         time_step_max=0.1,
+         time_step_init_scheme="random",
+         time_step_floor=1e-4,
+         rescale_prenorm_residual=False,
+         use_cache=True,
+         use_mambapy=False,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.state_size = state_size
+         self.num_hidden_layers = num_hidden_layers
+         self.layer_norm_epsilon = layer_norm_epsilon
+         self.conv_kernel = conv_kernel
+         self.expand = expand
+         self.intermediate_size = int(expand * self.hidden_size)
+         self.bos_token_id = bos_token_id
+         self.eos_token_id = eos_token_id
+         self.pad_token_id = pad_token_id
+         self.use_bias = use_bias
+         self.use_conv_bias = use_conv_bias
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.time_step_rank = math.ceil(self.hidden_size / 16) if time_step_rank == "auto" else time_step_rank
+         self.time_step_scale = time_step_scale
+         self.time_step_min = time_step_min
+         self.time_step_max = time_step_max
+         self.time_step_init_scheme = time_step_init_scheme
+         self.time_step_floor = time_step_floor
+         self.rescale_prenorm_residual = rescale_prenorm_residual
+         self.residual_in_fp32 = residual_in_fp32
+         self.use_cache = use_cache
+         self.use_mambapy = use_mambapy
+
+         super().__init__(bos_token_id=bos_token_id, eos_token_id=eos_token_id, pad_token_id=pad_token_id, **kwargs)
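Two fields in the checkpoint's `config.json` are derived rather than set directly, and the constructor above shows how: `time_step_rank="auto"` resolves to `math.ceil(hidden_size / 16)`, and `intermediate_size` to `int(expand * hidden_size)`. A quick sanity check (assuming `configuration_mamba.py` is importable from the working directory):

```python
# Minimal sketch: the derived fields reproduce the values stored in this
# checkpoint's config.json (vocab_size and layer count passed explicitly,
# everything else left at the defaults shown above).
from configuration_mamba import MambaConfig

cfg = MambaConfig(vocab_size=8000, num_hidden_layers=24)
assert cfg.time_step_rank == 48        # "auto" -> math.ceil(768 / 16)
assert cfg.intermediate_size == 1536   # int(expand * hidden_size) = 2 * 768
```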
checkpoint-86570/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a35d49c52eaa741578a4034ba7e179ed25a8712927ab58cef88aa2477e2df01
+ size 386689624
checkpoint-86570/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43d8d41f4dee0c6ef7b05b32efe48ce821b63afa2ce7af08cf86833a63b8642a
+ size 773531386
checkpoint-86570/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fe44f0c623b17f8af8459feca83dfe00349dbd649112bdfbaafc4fa8770a5ae
+ size 14244