Hugging Face
DORAEMONG/PRO-STEP-PRM-8B

Tags: PEFT · Safetensors · English · process-reward-model · prm · retrieval-augmented-generation · lora

Instructions for using DORAEMONG/PRO-STEP-PRM-8B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • PEFT

    How to use DORAEMONG/PRO-STEP-PRM-8B with PEFT:

    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the base model, then attach this repo's LoRA adapter on top of it.
    base_model = AutoModelForCausalLM.from_pretrained("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")
    model = PeftModel.from_pretrained(base_model, "DORAEMONG/PRO-STEP-PRM-8B")

    # The adapter repo ships its own tokenizer and chat template files.
    tokenizer = AutoTokenizer.from_pretrained("DORAEMONG/PRO-STEP-PRM-8B")
  • Notebooks
  • Google Colab
  • Kaggle
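Since this repository is a process reward model (PRM) adapter, inference usually means reading a score per reasoning step rather than generating text. The judgment tokens and prompt format for this particular adapter are not documented on this page, so the following is only a minimal sketch of the common recipe, with made-up logit values: renormalize the logits of a hypothetical good/bad judgment-token pair into a step-correctness probability.

```python
import math

# Hypothetical logits for a "+" (step correct) and "-" (step incorrect)
# judgment token, standing in for values read from the model's output
# logits at a step boundary. The real token ids and step delimiters
# depend on how this PRM was trained (not documented on this page).
def step_reward(good_logit: float, bad_logit: float) -> float:
    """Softmax over the two judgment tokens -> P(step is correct)."""
    e_good, e_bad = math.exp(good_logit), math.exp(bad_logit)
    return e_good / (e_good + e_bad)

print(round(step_reward(2.0, -1.0), 4))  # 0.9526
```

In practice `good_logit` and `bad_logit` would come from `model(**inputs).logits` at each step boundary; consult the repo's chat_template.jinja for the actual step formatting.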
PRO-STEP-PRM-8B (repository size: 186 MB)
  • 1 contributor
History: 2 commits
DORAEMONG
Add PRO-STEP PRM (LoRA over DeepSeek-R1-0528-Qwen3-8B)
c812dc1 verified 9 days ago
  • .gitattributes (1.57 kB)
  • README.md (2.33 kB)
  • adapter_config.json (740 Bytes)
  • adapter_model.safetensors (175 MB)
  • chat_template.jinja (3.13 kB)
  • special_tokens_map.json (485 Bytes)
  • tokenizer.json (11.4 MB)
  • tokenizer_config.json (5.59 kB)
  • training_args.bin (6.29 kB)

    Detected Pickle imports (10):

    • transformers.trainer_utils.SaveStrategy
    • accelerate.state.PartialState
    • torch.device
    • transformers.trainer_pt_utils.AcceleratorConfig
    • transformers.trainer_utils.IntervalStrategy
    • transformers.trainer_utils.SchedulerType
    • transformers.trainer_utils.HubStrategy
    • transformers.training_args.OptimizerNames
    • trl.trainer.sft_config.SFTConfig
    • accelerate.utils.dataclasses.DistributedType
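The "Detected Pickle imports" warning above comes from scanning the pickle opcode stream without executing it, since unpickling can run arbitrary code through its GLOBAL/STACK_GLOBAL opcodes (the Hub uses a scanner along these lines, e.g. picklescan). A rough standard-library sketch of the idea; `pickled_imports` is a made-up helper name, not the Hub's actual scanner:

```python
import collections
import pickle
import pickletools

def pickled_imports(data: bytes) -> list[str]:
    """List module.attr names a pickle would import, without unpickling it."""
    imports, strings = [], []
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # Protocol <= 3: the argument is "module name" in one string.
            module, name = arg.split(" ", 1)
            imports.append(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Protocol >= 4: module and attribute were pushed as strings.
            imports.append(f"{strings[-2]}.{strings[-1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return imports

data = pickle.dumps(collections.OrderedDict())
print(pickled_imports(data))  # ['collections.OrderedDict']
```

Everything except training_args.bin in this repo is already stored as Safetensors or plain JSON, which carry no such code-execution risk.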