Upload ppo model
Commit c473267 (verified)
- 1.52 kB initial commit
- 605 Bytes Upload ppo model
- 2.51 kB Upload ppo model
- 1.32 kB Upload ppo model
- 194 Bytes Upload ppo model
- 1.67 MB Upload ppo model
- 3.09 GB Upload ppo model
- 613 Bytes Upload ppo model
- 4.69 kB Upload ppo model
training_args.bin: Detected Pickle imports (33)
- "transformers.trainer_utils.SchedulerType",
- "transformers.models.qwen2.modeling_qwen2.Qwen2ForSequenceClassification",
- "transformers.models.qwen2.modeling_qwen2.Qwen2RMSNorm",
- "transformers.training_args.OptimizerNames",
- "transformers.models.qwen2.modeling_qwen2.Qwen2MLP",
- "transformers.trainer_utils.SaveStrategy",
- "torch.device",
- "accelerate.utils.dataclasses.DistributedType",
- "__builtin__.getattr",
- "torch.nn.modules.linear.Linear",
- "transformers.modeling_rope_utils._compute_default_rope_parameters",
- "collections.OrderedDict",
- "trl.trainer.ppo_config.PPOConfig",
- "transformers.generation.configuration_utils.GenerationConfig",
- "transformers.trainer_pt_utils.AcceleratorConfig",
- "transformers.models.qwen2.modeling_qwen2.Qwen2Model",
- "torch.nn.modules.container.ModuleList",
- "transformers.activations.SiLUActivation",
- "torch.bfloat16",
- "torch._utils._rebuild_parameter",
- "transformers.models.qwen2.modeling_qwen2.Qwen2ForCausalLM",
- "torch.FloatStorage",
- "transformers.models.qwen2.modeling_qwen2.Qwen2Attention",
- "transformers.models.qwen2.modeling_qwen2.Qwen2RotaryEmbedding",
- "torch.BFloat16Storage",
- "transformers.trainer_utils.IntervalStrategy",
- "transformers.models.qwen2.configuration_qwen2.Qwen2Config",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer",
- "accelerate.state.PartialState",
- "transformers.trainer_utils.HubStrategy",
- "__builtin__.set",
- "torch._utils._rebuild_tensor_v2"
How to fix it? Loading a pickle executes arbitrary code, which is why these imports are flagged; the usual remedy is to avoid shipping pickled files at all, e.g. by saving model weights in the safetensors format and plain configuration values as JSON.
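A scan like the one shown for training_args.bin can be reproduced with Python's standard library by walking the pickle opcodes without ever deserializing the stream. This is a sketch, not the Hub's actual scanner, and the helper name `scan_pickle_imports` is an assumption of this example:

```python
import collections
import pickle
import pickletools

def scan_pickle_imports(data: bytes) -> set[str]:
    """List the module.name globals a pickle stream would import,
    by reading opcodes only -- nothing is deserialized or executed."""
    imports: set[str] = set()
    # Track the two most recent string constants for STACK_GLOBAL.
    strings: collections.deque = collections.deque(maxlen=2)
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols <= 3: the argument is "module name" in one string.
            module, _, name = arg.partition(" ")
            imports.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            # Protocols >= 4: module and name were pushed as the two
            # most recent string constants on the stack.
            if len(strings) == 2:
                imports.add(f"{strings[0]}.{strings[1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return imports

# An OrderedDict pickle references collections.OrderedDict,
# one of the very imports detected in training_args.bin above.
found = scan_pickle_imports(pickle.dumps(collections.OrderedDict(a=1)))
print(found)
```

Because only opcodes are inspected, this is safe to run on untrusted files; actually calling `pickle.load` on them is not.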
- 6.18 GB Upload ppo model
- 3.38 MB Upload ppo model