| repo_name | repo_url | license | file_path | file_url | timestamp | reward_functions | trainer_usages |
|---|---|---|---|---|---|---|---|
dstackai/dstack | https://github.com/dstackai/dstack | Mozilla Public License 2.0 | examples/llms/deepseek/trl/amd/grpo_train.py | https://github.com/dstackai/dstack/blob/58181d1fe372488d9a64075c24d975935411f31d/examples/llms/deepseek/trl/amd/grpo_train.py | 2025-03-24T10:14:18.106059 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processing_class": ... |
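The `reward_len` entries in this table follow TRL's introductory length-based reward: score each completion by its distance from a 20-character target. Most rows negate the distance so that closer to the target means a higher reward; the row above returns it unnegated. A minimal runnable sketch of the negated variant, assuming plain-string completions:

```python
# Length-based reward sketch. GRPO reward functions receive the batch of
# completions (plus any extra dataset columns via **kwargs) and must return
# one float per completion.
def reward_len(completions, **kwargs):
    # Negative distance from a 20-character target: closer to 20 => higher reward.
    return [-abs(20 - len(completion)) for completion in completions]

rewards = reward_len(["12345678901234567890", "short"])
```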
philschmid/deep-learning-pytorch-huggingface | https://github.com/philschmid/deep-learning-pytorch-huggingface | MIT License | training/scripts/run_r1_grpo.py | https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/59b37973074de90004d10e5ff636f98160c9743a/training/scripts/run_r1_grpo.py | 2025-03-24T10:14:24.890615 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n \n Retur... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
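Several rows above check for a `<think>...</think><answer>...</answer>` layout. A hedged sketch of that check; the exact regex is an assumption, since the recorded code is truncated:

```python
import re

# Format-reward sketch: 1.0 when the completion matches the expected
# <think>/<answer> tag layout, else 0.0. re.DOTALL lets the tags span newlines.
PATTERN = r"^<think>.*?</think>\s*<answer>.*?</answer>$"

def format_reward_func(completions, **kwargs):
    return [
        1.0 if re.match(PATTERN, completion, re.DOTALL) else 0.0
        for completion in completions
    ]
```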
huihuihenqiang/wechat-simulate-human | https://github.com/huihuihenqiang/wechat-simulate-human | Unknown | ft/deepseek_r1_train.py | https://github.com/huihuihenqiang/wechat-simulate-human/blob/26042f23d5c26501816a2d4ef498134e94349085/ft/deepseek_r1_train.py | 2025-03-24T10:14:27.153189 | [
{
"name": "mark_reward (from list item 0)",
"code": "def mark_reward(completions, **kwargs):\n responses = [completion[0]['content'] for completion in completions]\n return [mark_num(response) for response in responses]",
"label": "{\"label\": \"ANSWER_TYPE_VALIDATION\"}"
},
{
"name": "sof... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[mark_reward, soft_format_reward, hard_format_reward, digit_reward, correctness_reward]",
"args": "training_args",
"train_dataset": "data",
"eval_dataset": null,
"peft_config":... |
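Rows like the one above pass a list of reward functions to `GRPOTrainer`. Conceptually, each function scores every completion and the per-completion scores are combined; a hedged sketch of that aggregation (equal weights and plain-string completions are assumptions here — TRL also supports per-function weights):

```python
# Two toy reward functions standing in for the longer lists recorded above.
def mark_reward(completions, **kwargs):
    return [1.0 if "<answer>" in c else 0.0 for c in completions]

def digit_reward(completions, **kwargs):
    return [0.5 if any(ch.isdigit() for ch in c) else 0.0 for c in completions]

def combine_rewards(reward_funcs, completions, **kwargs):
    # Score the batch with every function, then sum per completion.
    per_func = [f(completions, **kwargs) for f in reward_funcs]
    return [sum(scores) for scores in zip(*per_func)]

total = combine_rewards([mark_reward, digit_reward], ["<answer>42</answer>"])
```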
Doriandarko/MLX-GRPO | https://github.com/Doriandarko/MLX-GRPO | Unknown | mlx-grpo.py | https://github.com/Doriandarko/MLX-GRPO/blob/eaacf96e4ad464860144f52b9823408f0ae7c295/mlx-grpo.py | 2025-03-24T10:14:29.408549 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "MLXGRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "config",
"train_dataset": "dataset",
"eval_dataset"... |
michaelhla/pro-1 | https://github.com/michaelhla/pro-1 | Apache License 2.0 | train/unsloth-grpo.py | https://github.com/michaelhla/pro-1/blob/e205302deb82e971311125869e74efa4feb636fc/train/unsloth-grpo.py | 2025-03-24T10:14:31.663908 | [
{
"name": "stability_reward_func (from list item 0)",
"code": "def stability_reward_func(prompts, completions, sequences, orig_stabs, **kwargs):\n \"\"\"Custom reward function for stability optimization with LLM-based soft rewards\"\"\"\n rewards = []\n direct_extraction_success = 0\n lm_applier... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[stability_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"... |
transformerlab/transformerlab-api | https://github.com/transformerlab/transformerlab-api | GNU Affero General Public License v3.0 | transformerlab/plugins/unsloth_grpo_trainer/main.py | https://github.com/transformerlab/transformerlab-api/blob/b52bec9ee4707833a1f32cfe8130f6e7f618d52f/transformerlab/plugins/unsloth_grpo_trainer/main.py | 2025-03-24T10:14:33.958267 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c, start_thinking_string, end_thinking_string, start_answer_string, end_answer_string)... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, correctness_reward_func, int_reward_func, strict_format_reward_func, soft_format_reward_func]",
"args": "args",
"train_dataset": "dataset",
"eval_dataset": nul... |
JinSeoung-Oh/Reference | https://github.com/JinSeoung-Oh/Reference | Unknown | Reasoning/ReasoningModels.py | https://github.com/JinSeoung-Oh/Reference/blob/e49eb8aea5ea65f0c3b687ece28f075d392d8156/Reasoning/ReasoningModels.py | 2025-03-24T10:14:36.212741 | [
{
"name": "custom_reward_func (from list item 0)",
"code": "def custom_reward_func(prompts, completions, answer, min_reasoning_length=10, **kwargs) -> list[float]:\n responses = [completion[0]['content'] for completion in completions]\n q = prompts[0][-1]['content']\n extracted_responses_answer = [... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[custom_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processin... |
lmassaron/Gemma-2-2B-IT-GRPO | https://github.com/lmassaron/Gemma-2-2B-IT-GRPO | Unknown | gemma-grpo.py | https://github.com/lmassaron/Gemma-2-2B-IT-GRPO/blob/23802c018aa1cb9ac74fa14bf2391769c44ebb2b/gemma-grpo.py | 2025-03-24T10:14:45.291361 | [
{
"name": "correctness_reward_func (from list item 0)",
"code": "def correctness_reward_func(completions, answer, **kwargs):\n \"\"\"Reward function that checks if the answer is correct.\"\"\"\n responses = [completion[0]['content'] for completion in completions]\n extracted_responses = [extract_la... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "params.MODEL_NAME",
"reward_funcs": "[correctness_reward_func, format_reward_func]",
"args": "training_args",
"train_dataset": "gsm8k_train",
"eval_dataset": null,
"peft_config": "peft_config",
"... |
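The `correctness_reward_func` entries follow a common GSM8K recipe: extract the final answer from the completion and compare it against the dataset's `answer` column, which the trainer forwards as a keyword argument. A sketch under that assumption (the tag names and the 2.0 payoff are illustrative, since the recorded code is truncated):

```python
def extract_answer(text):
    # Pull the text between the last <answer>...</answer> pair, if present.
    if "<answer>" in text and "</answer>" in text:
        return text.split("<answer>")[-1].split("</answer>")[0].strip()
    return text.strip()

def correctness_reward_func(completions, answer, **kwargs):
    # Chat-style completions: each item is a list of message dicts.
    responses = [completion[0]["content"] for completion in completions]
    extracted = [extract_answer(r) for r in responses]
    return [2.0 if e == a else 0.0 for e, a in zip(extracted, answer)]
```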
yaosheng216/torch_demo | https://github.com/yaosheng216/torch_demo | Unknown | grpo/distillation_qwen.py | https://github.com/yaosheng216/torch_demo/blob/7c441b4fd4f4f71a62035761c206ed7aeba2439a/grpo/distillation_qwen.py | 2025-03-24T10:14:54.394361 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
erayalp808/GRPO-fine-tuning-turkish-gpt2-350m | https://github.com/erayalp808/GRPO-fine-tuning-turkish-gpt2-350m | Unknown | grpo_training.py | https://github.com/erayalp808/GRPO-fine-tuning-turkish-gpt2-350m/blob/5428820ca46cf074d97f957f126b3255a567c441/grpo_training.py | 2025-03-24T10:15:03.493678 | [
{
"name": "correctness_reward_func (from list item 0)",
"code": "def correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:\n responses = [completion[0]['content'] for completion in completions]\n q = prompts[0][-1]['content']\n extracted_responses = [extract_final_answer(r... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "lora_model",
"reward_funcs": "[correctness_reward_func, strict_format_reward_func, soft_format_reward_func, xmlcount_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
... |
Asad-Shahab/sudokuLLM | https://github.com/Asad-Shahab/sudokuLLM | MIT License | finetune.py | https://github.com/Asad-Shahab/sudokuLLM/blob/4593b0f4b3d80f3afebf18653a279e6cea3b0068/finetune.py | 2025-03-24T10:15:24.265651 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function for XML formatting details.\"\"\"\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
alxndrTL/gpu-rl | https://github.com/alxndrTL/gpu-rl | Unknown | grpo_gsm8k.py | https://github.com/alxndrTL/gpu-rl/blob/1f2bd13c9864049ec94e356f20f0ffb7a1f4b1e3/grpo_gsm8k.py | 2025-03-24T10:15:26.599891 | [
{
"name": "format_reasoning_reward (from list item 0)",
"code": "def format_reasoning_reward(prompts, completions, answer, **kwargs) -> list[float]:\n parsed_responses = parse_responses(completions)\n rewards = [0.5 if r['thinking_content'] and r['response'] else 0.0 for r in parsed_responses]\n re... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reasoning_reward, format_number_reward, accuracy_reward, log_rewards]",
"args": "training_args",
"train_dataset": "data",
"eval_dataset": null,
... |
Sam-de-Ham/finetuning-tests | https://github.com/Sam-de-Ham/finetuning-tests | Unknown | full_training_freeze.py | https://github.com/Sam-de-Ham/finetuning-tests/blob/7617ee1361314a054f353d2764affb6ace27ec50/full_training_freeze.py | 2025-03-24T10:15:28.888442 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [-abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B'",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_clas... |
Sam-de-Ham/finetuning-tests | https://github.com/Sam-de-Ham/finetuning-tests | Unknown | full_training_simple.py | https://github.com/Sam-de-Ham/finetuning-tests/blob/7617ee1361314a054f353d2764affb6ace27ec50/full_training_simple.py | 2025-03-24T10:15:31.105297 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [-abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B'",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_clas... |
summerspringwei/alpaca-lora-decompilation | https://github.com/summerspringwei/alpaca-lora-decompilation | Apache License 2.0 | models/llmcompiler/grpo_example.py | https://github.com/summerspringwei/alpaca-lora-decompilation/blob/3d5fb5344992dd9c6e8a6447feee89dc889921fd/models/llmcompiler/grpo_example.py | 2025-03-24T10:15:37.883093 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [-abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'Qwen/Qwen2-0.5B-Instruct'",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
... |
meetrais/LLM-Fine-Tuning | https://github.com/meetrais/LLM-Fine-Tuning | Unknown | Qwen2.5_3B_GRPO.py | https://github.com/meetrais/LLM-Fine-Tuning/blob/d5e226e401894795c38e671fef4e117254cfeb51/Qwen2.5_3B_GRPO.py | 2025-03-24T10:15:47.125877 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"ANSWER_TYPE_VALIDATION\"}"
},
... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
MarcoTuc/xent | https://github.com/MarcoTuc/xent | Unknown | llama-grpo-xent/lab.py | https://github.com/MarcoTuc/xent/blob/2a46ba203123eda2e8f195025309af53c0899555/llama-grpo-xent/lab.py | 2025-03-24T10:15:56.616591 | [
{
"name": "dummy_reward (from list item 0)",
"code": "def dummy_reward(completions, **kwargs):\n responses = [completion[0]['content'] for completion in completions]\n print(f'{len(responses)} completions have been produced')\n for response in responses:\n print(response)\n print('\\n... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[dummy_reward]",
"args": "training_args",
"train_dataset": "dataset['train']",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"proces... |
Sam-de-Ham/finetuning-tests | https://github.com/Sam-de-Ham/finetuning-tests | Unknown | full_training_simple_grpo.py | https://github.com/Sam-de-Ham/finetuning-tests/blob/7617ee1361314a054f353d2764affb6ace27ec50/full_training_simple_grpo.py | 2025-03-24T10:16:01.156066 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [-abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processing_class": ... |
The-Swarm-Corporation/AgentGym | https://github.com/The-Swarm-Corporation/AgentGym | MIT License | grpo_example_two.py | https://github.com/The-Swarm-Corporation/AgentGym/blob/baa5184fdbdc48bd64f5bde17909fa8c482c2851/grpo_example_two.py | 2025-03-24T10:16:07.952736 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'Qwen/Qwen2-0.5B-Instruct'",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
... |
haoruilee/Awesome-GRPO-training-example | https://github.com/haoruilee/Awesome-GRPO-training-example | Unknown | GRPO-Llama-1B.py | https://github.com/haoruilee/Awesome-GRPO-training-example/blob/1a0cf86a50ed4d4602a1b28dbe44c23a83573a11/GRPO-Llama-1B.py | 2025-03-24T10:16:12.544588 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
xiiiiiiiiii/strategicLearning | https://github.com/xiiiiiiiiii/strategicLearning | Unknown | train_grpo_gsm8k.py | https://github.com/xiiiiiiiiii/strategicLearning/blob/f92d0b57e9f7727e0cdad8a5f3ee04b163071ab3/train_grpo_gsm8k.py | 2025-03-24T10:16:14.791293 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
datawhalechina/unlock-deepseek | https://github.com/datawhalechina/unlock-deepseek | Unknown | Datawhale-R1/train_Datawhale-R1_unsloth.py | https://github.com/datawhalechina/unlock-deepseek/blob/7bfaaf6f93dcf2249525392d5310881a58f6f79b/Datawhale-R1/train_Datawhale-R1_unsloth.py | 2025-03-24T10:16:21.520168 | [
{
"name": "format_reward_func (from list item 0)",
    "code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n Format reward function; checks whether the model output matches: <think>...</think><answer>...</answer>\n\n Args:\n completions (list[str]): Generated outputs\n Returns:\n list[float]: Reward scores\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": null,
"reward_proce... |
Manto/chess-reasoning-zero | https://github.com/Manto/chess-reasoning-zero | MIT License | qwen-1.5b.countdown.py | https://github.com/Manto/chess-reasoning-zero/blob/a887574cccdee0a752c80181ce3d6f428acbd52a/qwen-1.5b.countdown.py | 2025-03-24T10:16:23.784558 | [
{
"name": "countdown_reward_func",
"code": "def countdown_reward_func(prompts, completions, ground_truth, **kwargs) -> list[float]:\n scores = []\n for prompt, completion, truth in zip(prompts, completions, ground_truth):\n score = compute_score(completion[0]['content'], truth)\n scores.... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "countdown_reward_func",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "lora_config",
"reward_processing_classe... |
summerspringwei/alpaca-lora-decompilation | https://github.com/summerspringwei/alpaca-lora-decompilation | Apache License 2.0 | models/llmcompiler/grpo_exebench.py | https://github.com/summerspringwei/alpaca-lora-decompilation/blob/3d5fb5344992dd9c6e8a6447feee89dc889921fd/models/llmcompiler/grpo_exebench.py | 2025-03-24T10:16:28.246568 | [
{
"name": "reward_compilation",
"code": "def reward_compilation(completions, **kwargs):\n original_input = [{} for _ in range(len(completions))]\n predict_list_length = []\n for k, v in kwargs.items():\n for i in range(len(v)):\n original_input[i][k] = v[i]\n validation_list = ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_path",
"reward_funcs": "reward_compilation",
"args": "training_args",
"train_dataset": "exebench_dataset",
"eval_dataset": null,
"peft_config": "lora_config",
"reward_processing_classes": n... |
Oxen-AI/GRPO-With-Cargo-Feedback | https://github.com/Oxen-AI/GRPO-With-Cargo-Feedback | MIT License | train.py | https://github.com/Oxen-AI/GRPO-With-Cargo-Feedback/blob/11d0f570898f5764d9a366898ccb3da4c745a378/train.py | 2025-03-24T10:16:32.752977 | [
{
"name": "cargo_build_reward_func (from list item 0)",
"code": "@experiment.log(f'cargo_build_rewards.jsonl')\ndef cargo_build_reward_func(prompts, completions, **kwargs) -> list[float]:\n responses = [completion[0]['content'] for completion in completions]\n extracted_answers = [extract_rust_code(r)... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[cargo_build_reward_func, cargo_clippy_reward_func, cargo_test_reward_func, non_empty_reward_func, test_block_count_reward_func, tests_have_asserts_reward_func]",
"args": "training_args",
... |
awdemos/awdemos | https://github.com/awdemos/awdemos | Unknown | demos/llm/alpha_maze_finder_grpo/alphamaze_solver.py | https://github.com/awdemos/awdemos/blob/f59b9335803e762618c92ec7b6e655a693607555/demos/llm/alpha_maze_finder_grpo/alphamaze_solver.py | 2025-03-24T10:16:37.352898 | [
{
"name": "maze_reward (from list item 0)",
"code": "def maze_reward(completions, prompts, **kwargs):\n rewards = []\n for completion in completions:\n game = MazeGame()\n moves = completion.split()\n for move in moves:\n _, done = game.move(move)\n if done:\... | [
{
"trainer_type": "CustomGRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[maze_reward]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"proc... |
HarleyCoops/TrainingRun | https://github.com/HarleyCoops/TrainingRun | Unknown | grpo_demo.py | https://github.com/HarleyCoops/TrainingRun/blob/371054d5438de5f97e2b54d8bdfd8deebbd3fe85/grpo_demo.py | 2025-03-24T10:16:39.741640 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
erfanzar/EasyDeL | https://github.com/erfanzar/EasyDeL | Apache License 2.0 | easydel/scripts/finetune/gsm8k_grpo.py | https://github.com/erfanzar/EasyDeL/blob/64a77804783cb790bff1f8c744163915f55aea5f/easydel/scripts/finetune/gsm8k_grpo.py | 2025-03-24T10:16:46.626204 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": null,
"train_dataset": "train_dataset",
"eval_dataset":... |
benglard/consciousness | https://github.com/benglard/consciousness | Unknown | llm_safety.py | https://github.com/benglard/consciousness/blob/dc7e58655c53bb34d2bc9c1b6fb0c2f26a77b339/llm_safety.py | 2025-03-24T10:16:48.956696 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"LENGTH_BASED\"}"
},
{
"nam... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func, smol_model_predictor]",
"args": "training_args",
"train_dataset": "data... |
nnebp/GPRO-s-game-of-life | https://github.com/nnebp/GPRO-s-game-of-life | Unknown | train_gsm8k_mps.py | https://github.com/nnebp/GPRO-s-game-of-life/blob/169c46a84cde5f6deb941bed685fdab0ffd1e11b/train_gsm8k_mps.py | 2025-03-24T10:16:51.195643 | [
{
"name": "correctness_reward (from list item 0)",
"code": "def correctness_reward(prompts, completions, answer, **kwargs):\n \"\"\"Reward function for correct answers\"\"\"\n responses = [completion[0]['content'] for completion in completions]\n extracted = [extract_xml_answer(r) for r in response... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "args.model_name",
"reward_funcs": "[correctness_reward, format_reward, numeric_answer_reward]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": "peft_config",
... |
jianzhnie/Open-R1 | https://github.com/jianzhnie/Open-R1 | Apache License 2.0 | examples/grpo_gsm8k.py | https://github.com/jianzhnie/Open-R1/blob/cbcaa40cf795a99a394db4806685018d06452c23/examples/grpo_gsm8k.py | 2025-03-24T10:16:53.561361 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
jiangqx0225/llm_run_file | https://github.com/jiangqx0225/llm_run_file | Unknown | unsloth_grpo.py | https://github.com/jiangqx0225/llm_run_file/blob/c698e03108035ef3511df5afcc7f2029d25e90a7/unsloth_grpo.py | 2025-03-24T10:16:55.859060 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
idanshen/multi_ref | https://github.com/idanshen/multi_ref | Unknown | gsm8k_grpo.py | https://github.com/idanshen/multi_ref/blob/53c9484f9963d0eb6c320eb7741ca08018aaa350/gsm8k_grpo.py | 2025-03-24T10:17:00.417588 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
yhfgyyf/GRPO_script | https://github.com/yhfgyyf/GRPO_script | Unknown | grpo_bleu.py | https://github.com/yhfgyyf/GRPO_script/blob/af9b2066201156eb07b270f60ccb50b552611249/grpo_bleu.py | 2025-03-24T10:17:02.749926 | [
{
"name": "assistant_format_count_reward (from list item 0)",
"code": "def assistant_format_count_reward(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_assistant_format(c) for c in contents]",
"label": "{\"label\": \"FOR... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[assistant_format_count_reward, assistant_format_reward_func, soft_assistant_format_reward_func, bleu_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_datase... |
Legionof7/GRPOdx | https://github.com/Legionof7/GRPOdx | Unknown | Tic Tac Toe Game.py | https://github.com/Legionof7/GRPOdx/blob/b039810b169606493a1e3202dbf1b4d9cec02942/Tic%20Tac%20Toe%20Game.py | 2025-03-24T10:17:18.716467 | [
{
"name": "game_reward (from list item 0)",
"code": "def game_reward(completions, **kwargs) -> list[float]:\n return [0.0 for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
},
{
"name": "game_reward (from [game_reward])",
"code": "def game_reward(completions, **kwar... | [
{
"trainer_type": "CustomGRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[game_reward]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"proc... |
avinashreddydev/low-thinking | https://github.com/avinashreddydev/low-thinking | Apache License 2.0 | src/grpo_train_math.py | https://github.com/avinashreddydev/low-thinking/blob/95a2a8b79d7a863174e5ed33ed199a4116490f8a/src/grpo_train_math.py | 2025-03-24T10:17:20.948825 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function that checks if the completion has the correct format.\"\"\"\n pattern = '^<reasoning>(?:(?!</reasoning>).)*</reasoning>\\\\n<answer>(?:(?!</answer>).)*</ans... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset_train",
"eval_dataset": "dataset_test",
"peft_config": null,
"reward_pr... |
zzlzero/CodeLess | https://github.com/zzlzero/CodeLess | Unknown | run_grpo.py | https://github.com/zzlzero/CodeLess/blob/b2cb84a16a1764945fd93f4ca6d7fb39a55858b1/run_grpo.py | 2025-03-24T10:17:23.255479 | [
{
"name": "len_reward_func (from list item 0)",
"code": "def len_reward_func(completions, **kwargs):\n rewards = []\n max_len = max((len(completion) for completion in completions))\n for completion in completions:\n generation = task.postprocess_generation(completion)\n rewards.append... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[len_reward_func, correct_code_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": ... |
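The `len_reward_func` row above normalizes length against the longest completion in the batch, so shorter outputs score higher. A simplified sketch (the repo's `task.postprocess_generation` step is omitted):

```python
# Batch-normalized length reward: the longest completion scores 0.0 and an
# empty completion would score 1.0.
def len_reward_func(completions, **kwargs):
    max_len = max(len(c) for c in completions)
    return [1.0 - len(c) / max_len for c in completions]
```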
menloresearch/visual-thinker | https://github.com/menloresearch/visual-thinker | Unknown | training/grpo_stage.py | https://github.com/menloresearch/visual-thinker/blob/bb74ee6fbf72b34321edcf2bb958921f694ab622/training/grpo_stage.py | 2025-03-24T10:17:27.798388 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> List[float]:\n \"\"\"\n Reward function based on proper XML tag usage.\n \n Args:\n completions: Model completions\n \n Returns:\n List of reward scores\n \... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward... |
vpareek2/llm-experiments | https://github.com/vpareek2/llm-experiments | MIT License | llama-r1/grpo_trl.py | https://github.com/vpareek2/llm-experiments/blob/188644b99675411b0368d92c0cd29ddec0a0821f/llama-r1/grpo_trl.py | 2025-03-24T10:17:30.134516 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
bbirdxr/GRPO-Qwen2.5-7B-Medicine | https://github.com/bbirdxr/GRPO-Qwen2.5-7B-Medicine | Unknown | trl_grpo.py | https://github.com/bbirdxr/GRPO-Qwen2.5-7B-Medicine/blob/7617ccef6880d8cc13137df8def964b619143665/trl_grpo.py | 2025-03-24T10:17:32.445182 | [
{
"name": "reward_think_ratio (from list item 0)",
"code": "def reward_think_ratio(completions, **kwargs):\n scores = []\n for completion in completions:\n think_count = completion.count('<think>')\n think_end_count = completion.count('</think>')\n score = -abs(think_count - think... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[reward_think_ratio, similarity_sentence_score, correct_score]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "peft... |
uukuguy/Replicate-R1 | https://github.com/uukuguy/Replicate-R1 | MIT License | tasks/mini-r1/mini_r1.py | https://github.com/uukuguy/Replicate-R1/blob/e050c71aebcdffc75b2ec634bb82e4897510606a/tasks/mini-r1/mini_r1.py | 2025-03-24T10:17:34.678954 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n \n Retur... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_config.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config":... |
chunhuizhang/llm_rl | https://github.com/chunhuizhang/llm_rl | Unknown | tutorials/r1-k1.5/training_grpo.py | https://github.com/chunhuizhang/llm_rl/blob/1c2b9baff36b219076b07f5aeeb4f748d7461388/tutorials/r1-k1.5/training_grpo.py | 2025-03-24T10:17:41.418674 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
TianyiPeng/reproduce-R1-countdown | https://github.com/TianyiPeng/reproduce-R1-countdown | Apache License 2.0 | run_r1_grpo.py | https://github.com/TianyiPeng/reproduce-R1-countdown/blob/42723af056336392ca33b4bdc687253f7aa99450/run_r1_grpo.py | 2025-03-24T10:17:45.946440 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n \n Retur... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
infinitylogesh/LLM-RL-experiments | https://github.com/infinitylogesh/LLM-RL-experiments | Unknown | train.py | https://github.com/infinitylogesh/LLM-RL-experiments/blob/af420409ffd0af7a519d0671ee5fcedb7c767b1c/train.py | 2025-03-24T10:17:48.243460 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n \n Retur... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
manhdo249/T5_TrainDPO | https://github.com/manhdo249/T5_TrainDPO | Unknown | TrainingDPO/training/scripts/run_r1_grpo.py | https://github.com/manhdo249/T5_TrainDPO/blob/04588dadb5ee0e20ee0c01143b622630cfbd232e/TrainingDPO/training/scripts/run_r1_grpo.py | 2025-03-24T10:17:50.515846 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n \n Retur... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
zhuopanyang/algorithm_learning | https://github.com/zhuopanyang/algorithm_learning | Unknown | github_model_learning/unlock-deepseek-main/Datawhale-R1/train_Datawhale-R1_unsloth.py | https://github.com/zhuopanyang/algorithm_learning/blob/1d124a74926e8159b696b5fff18fb7f4b1d9bb80/github_model_learning/unlock-deepseek-main/Datawhale-R1/train_Datawhale-R1_unsloth.py | 2025-03-24T10:18:08.865791 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": null,
"reward_proce... |
minio/blog-assets | https://github.com/minio/blog-assets | Creative Commons Attribution 4.0 International | rl-with-aihub/main.py | https://github.com/minio/blog-assets/blob/d6a56ffcb32ae878e77a8c6702dcbf38d29a7950/rl-with-aihub/main.py | 2025-03-24T10:18:29.499038 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
scchy/RL | https://github.com/scchy/RL | MIT License | src/LLMRL/train_DataWhale-R1.py | https://github.com/scchy/RL/blob/38d4e72222d088bdaf82ca1807f41d6d293b67c7/src/LLMRL/train_DataWhale-R1.py | 2025-03-24T10:18:40.917629 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions: List[AnyStr], **kwargs) -> List[float]:\n \"\"\" \n 格式奖励函数,检查模型输出格式是否匹配:<think>...</think><answer>...</answer>\n \n args:\n completions: 生成的输出\n return:\n 奖励分数\n \"\"\"\n reward... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func, thought_len_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset... |
wyf3/llm_related | https://github.com/wyf3/llm_related | Unknown | deepseek_learn/deepseek_r1_train/deepseek_r1_train.py | https://github.com/wyf3/llm_related/blob/b24892f54fb556a9ba0ebdb59207dfe785d6703c/deepseek_learn/deepseek_r1_train/deepseek_r1_train.py | 2025-03-24T10:18:43.153796 | [
{
"name": "mark_reward (from list item 0)",
"code": "def mark_reward(completions, **kwargs):\n responses = [completion[0]['content'] for completion in completions]\n return [mark_num(response) for response in responses]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"name": "soft_format_... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[mark_reward, soft_format_reward, hard_format_reward, digit_reward, correctness_reward]",
"args": "training_args",
"train_dataset": "data",
"eval_dataset": null,
"peft_config":... |
Bhavinrathava/UnRLearning | https://github.com/Bhavinrathava/UnRLearning | Unknown | src/training.py | https://github.com/Bhavinrathava/UnRLearning/blob/740d713668f4c6fb0b34b19c2157106b0ae6b3b0/src/training.py | 2025-03-24T10:18:56.853652 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>\n <isRelated> 0 or 1 depending if answer is related to Harry Potter </isRelated>\n <normalAnswer> generate normal answer </normalAnswer>\n <anchorWo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_config.model_name_or_path",
"reward_funcs": "[format_reward_func, answer_reward_function]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config... |
OpenCSGs/opencsg-r1 | https://github.com/OpenCSGs/opencsg-r1 | Unknown | src/full_train_grpo.py | https://github.com/OpenCSGs/opencsg-r1/blob/7704b7b4856d193b043df273bc586b56d66daaf4/src/full_train_grpo.py | 2025-03-24T10:19:01.366184 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": n... |
minwukim/VRL | https://github.com/minwukim/VRL | Unknown | train_qwen0.5b_grpo.py | https://github.com/minwukim/VRL/blob/e6aa1718019db1acce0b2c49994827bb88be7e4d/train_qwen0.5b_grpo.py | 2025-03-24T10:19:03.619079 | [
{
"name": "reward_len",
"code": "def reward_len(completions, **kwargs):\n return [-abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'Qwen/Qwen2-0.5B-Instruct'",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
... |
SiliangZeng/R1-Experiment | https://github.com/SiliangZeng/R1-Experiment | Unknown | grpo_demo.py | https://github.com/SiliangZeng/R1-Experiment/blob/9981999b2af0ac73d967517cdb987cb9355aef4b/grpo_demo.py | 2025-03-24T10:19:08.189234 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
rocke2020/RLHF-exercise | https://github.com/rocke2020/RLHF-exercise | MIT License | grpo/simple_gsm8k/train.py | https://github.com/rocke2020/RLHF-exercise/blob/fe27dd4b73236508d7412d7520137a8633a862b0/grpo/simple_gsm8k/train.py | 2025-03-24T10:19:21.957208 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"FORMAT_ADHERENCE\"}"
},
{
... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
OpenCSGs/opencsg-r1 | https://github.com/OpenCSGs/opencsg-r1 | Unknown | src/lora_train_grpo.py | https://github.com/OpenCSGs/opencsg-r1/blob/7704b7b4856d193b043df273bc586b56d66daaf4/src/lora_train_grpo.py | 2025-03-24T10:19:24.218927 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": null,
"reward_proce... |
syafiq/reasoning | https://github.com/syafiq/reasoning | Unknown | src/grpo_qwen.py | https://github.com/syafiq/reasoning/blob/6130b0d2b0dc71a446de9d5b5d48678c101baf7f/src/grpo_qwen.py | 2025-03-24T10:19:35.772726 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function that checks if the completion has the expected XML format.\"\"\"\n pattern = '<reasoning>.*?</reasoning>\\\\s*<answer>.*?</answer>'\n responses = [comple... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, vulnerability_identification_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "filtered_dataset",
"eval_dataset": null,
"peft_con... |
Qsingle/open-medical-r1 | https://github.com/Qsingle/open-medical-r1 | Apache License 2.0 | src/open_r1/grpo.py | https://github.com/Qsingle/open-medical-r1/blob/a41209173d3adf08d4928db64cafaff255ab34d8/src/open_r1/grpo.py | 2025-03-24T10:19:42.620703 | [
{
"name": "Potential lambda reward: lambda_0",
"code": "lambda: ['accuracy', 'format', 'xmlcount_reward_func']",
"label": "{\"label\": \"ANSWER_CORRECTNESS\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "reward_funcs",
"args": "training_args",
"train_dataset": "dataset[script_args.dataset_train_split]",
"eval_dataset": "dataset[script_args.dataset_test_split] ... |
cttmayi/AIDemo | https://github.com/cttmayi/AIDemo | Unknown | fine-tune/deepseek_r1/v5_u_grpo.py | https://github.com/cttmayi/AIDemo/blob/6d98e58035e5478966a1a76241c680210c6f942c/fine-tune/deepseek_r1/v5_u_grpo.py | 2025-03-24T10:19:44.883095 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
Eric-is-good/happyR1 | https://github.com/Eric-is-good/happyR1 | Unknown | grpo_train.py | https://github.com/Eric-is-good/happyR1/blob/8688febba25995132ab51a0ccac4191fd26c75a5/grpo_train.py | 2025-03-24T10:19:47.144650 | [
{
"name": "length_reward_func (from list item 0)",
"code": "def length_reward_func(completions, **kwargs):\n score = [float(len(completion[0]['content'])) * 0.002 for completion in completions]\n return score",
"label": "{\"label\": \"LENGTH_BASED\"}"
},
{
"name": "format_reward_func (from... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'Qwen/Qwen2.5-3B-Instruct'",
"reward_funcs": "[length_reward_func, format_reward_func, answer_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": nu... |
ZenWisty/SelfLearnAssis_BasedOnLanguageModel | https://github.com/ZenWisty/SelfLearnAssis_BasedOnLanguageModel | MIT License | doc/DLReasoningDeepSeekR1/Reproduce_DeepSeek_r1.py | https://github.com/ZenWisty/SelfLearnAssis_BasedOnLanguageModel/blob/c003a35770af43612821fa1eaf3ccb2e54455047/doc/DLReasoningDeepSeekR1/Reproduce_DeepSeek_r1.py | 2025-03-24T10:19:49.416490 | [
{
"name": "mark_reward (from list item 0)",
"code": "def mark_reward(completions, **kwargs):\n responses = [completion[0]['content'] for completion in completions]\n return [mark_num(response) for response in responses]",
"label": "{\"label\": \"ANSWER_TYPE_VALIDATION\"}"
},
{
"name": "sof... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[mark_reward, soft_format_reward, hard_format_reward, digit_reward, correctness_reward]",
"args": "training_args",
"train_dataset": "data",
"eval_dataset": null,
"peft_config":... |
jasonacox/ProtosAI | https://github.com/jasonacox/ProtosAI | MIT License | llm/grpo.py | https://github.com/jasonacox/ProtosAI/blob/44e93e6acc9379be0a821a5d54b5f123bba91c67/llm/grpo.py | 2025-03-24T10:19:58.428639 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
jianzhnie/Open-R1 | https://github.com/jianzhnie/Open-R1 | Apache License 2.0 | examples/grpo_demo.py | https://github.com/jianzhnie/Open-R1/blob/cbcaa40cf795a99a394db4806685018d06452c23/examples/grpo_demo.py | 2025-03-24T10:20:09.148792 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n\n Returns:\n ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
nnebp/GPRO-s-game-of-life | https://github.com/nnebp/GPRO-s-game-of-life | Unknown | train_gol_grpo.py | https://github.com/nnebp/GPRO-s-game-of-life/blob/169c46a84cde5f6deb941bed685fdab0ffd1e11b/train_gol_grpo.py | 2025-03-24T10:20:16.161368 | [
{
"name": "correctness_reward (from list item 0)",
"code": "def correctness_reward(prompts, completions, answer, **kwargs):\n \"\"\"Reward function for correct Game of Life next states\"\"\"\n responses = [completion[0]['content'] for completion in completions]\n extracted = [extract_xml_answer(r) ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "MODEL_NAME",
"reward_funcs": "[correctness_reward, format_reward, reasoning_quality_reward]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "val_dataset",
"peft_config": "p... |
tylerthecoder/func-ctrl-demo | https://github.com/tylerthecoder/func-ctrl-demo | Unknown | server/src/training/rl_agent.py | https://github.com/tylerthecoder/func-ctrl-demo/blob/25356e8bcab34c7b43625f721ef6500ad056a265/server/src/training/rl_agent.py | 2025-03-24T10:20:18.493852 | [
{
"name": "custom_reward_function",
"code": "def custom_reward_function(messages: List[Dict[str, str]]) -> float:\n return float(sum((len(msg['content']) for msg in messages)))",
"label": "{\"label\": \"LENGTH_BASED\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "custom_reward_function",
"args": "training_args",
"train_dataset": "None",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processing... |
ralphbutler/LLM_misc | https://github.com/ralphbutler/LLM_misc | Unknown | reinforcement_learning_for_reasoning/train_numina.py | https://github.com/ralphbutler/LLM_misc/blob/94473185408f416ce762e8510678043cf1912486/reinforcement_learning_for_reasoning/train_numina.py | 2025-03-24T10:20:20.752790 | [
{
"name": "format_reward (from list item 0)",
"code": "def format_reward(completions, **kwargs):\n \"\"\"Reward function that checks if the completion has a specific format.\"\"\"\n pattern = '^<think>.*?</think>\\\\s*<answer>.*?</answer>$'\n completion_contents = [completion[0]['content'] for comp... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward, accuracy_reward]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null... |
bdsaglam/pipeline-grpo | https://github.com/bdsaglam/pipeline-grpo | Unknown | nbs/train_unsloth.py | https://github.com/bdsaglam/pipeline-grpo/blob/a70823fda7e3e704cb8f5372b3e7435f50280cca/nbs/train_unsloth.py | 2025-03-24T10:20:37.056048 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
chloeji8888/deepchess | https://github.com/chloeji8888/deepchess | Unknown | chess_gym_colab.py | https://github.com/chloeji8888/deepchess/blob/31a2877efa6441ba312db8b031a961383a699ddb/chess_gym_colab.py | 2025-03-24T10:20:50.857420 | [
{
"name": "reward_function",
"code": "def reward_function(prompts, completions):\n rewards = []\n samples_to_log = []\n env = ChessEnv(stockfish_path=stockfish_path)\n try:\n for prompt, completion in zip(prompts, completions):\n try:\n env._init_engine()\n ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "reward_function",
"args": "grpo_config",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": "peft_config",
"reward_processing_classes": null,
"process... |
afirez/AlphaChatGPT | https://github.com/afirez/AlphaChatGPT | Unknown | alphachatgpt/case_09_deepseek_r1/case_01_llm_grpo/grpo_demo.py | https://github.com/afirez/AlphaChatGPT/blob/2f585ad1043f3792ad8a63a81dd88ed42996c648/alphachatgpt/case_09_deepseek_r1/case_01_llm_grpo/grpo_demo.py | 2025-03-24T10:20:53.111999 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
Named666/AlphaAnon | https://github.com/Named666/AlphaAnon | Unknown | grpo.py | https://github.com/Named666/AlphaAnon/blob/ce04ff76f5fdebd2273994bd492e888676411563/grpo.py | 2025-03-24T10:21:08.904057 | [
{
"name": "reward_wrapper (from list item 0)",
"code": "def reward_wrapper(prompts, completions, **kwargs):\n rewards = []\n for prompt, completion in zip(prompts, completions):\n dataset_completion = prompt_to_completion.get(prompt, '')\n reward = reward_function(prompt, completion, dat... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[reward_wrapper]",
"args": "training_args",
"train_dataset": "dataset['train']",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"proc... |
dotieuthien/test-modal | https://github.com/dotieuthien/test-modal | Unknown | test_r1/train_example.py | https://github.com/dotieuthien/test-modal/blob/2de0aaf7aef7c114be7803b36c41a013c217a5ca/test_r1/train_example.py | 2025-03-24T10:21:20.259173 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, target, **kwargs):\n \"\"\"\n Format: <think>...</think><answer>...</answer>\n Args:\n completions (list[str]): Generated outputs\n target (list[str]): Expected answers\n ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": "... |
datawhalechina/unlock-deepseek | https://github.com/datawhalechina/unlock-deepseek | Unknown | Datawhale-R1/train_Datawhale-R1.py | https://github.com/datawhalechina/unlock-deepseek/blob/7bfaaf6f93dcf2249525392d5310881a58f6f79b/Datawhale-R1/train_Datawhale-R1.py | 2025-03-24T10:22:00.930207 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": n... |
mesolitica/malaya | https://github.com/mesolitica/malaya | MIT License | session/small-malaysian-reasoning/train_grpo.py | https://github.com/mesolitica/malaya/blob/0a1bfd89e56046f8a6c52d6c193381a7bf6ee25b/session/small-malaysian-reasoning/train_grpo.py | 2025-03-24T10:22:05.407633 | [
{
"name": "length_reward_func (from list item 0)",
"code": "def length_reward_func(completions, **kwargs):\n \"\"\"Reward function that gives higher scores to longer completions.\"\"\"\n return [float(len(completion[0]['content'].split()) / 4096) for completion in completions]",
"label": "{\"label... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[length_reward_func, format_reward_func, correct_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_pr... |
Zeyi-Lin/easy-r1 | https://github.com/Zeyi-Lin/easy-r1 | Unknown | train.py | https://github.com/Zeyi-Lin/easy-r1/blob/5cdca2dedf05b567e941effc97b68d4d989698df/train.py | 2025-03-24T10:22:09.265501 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"\n Reward function that counts the number of XML tags in the completion.\n 计算文本中XML标签的数量。\n \"\"\"\n contents = [completion[0]['content'] for completion in com... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
manavgup/grpo_granite | https://github.com/manavgup/grpo_granite | Apache License 2.0 | src/base_trainer.py | https://github.com/manavgup/grpo_granite/blob/23bf29637b46736c2e949a7a1bc3a5120f26dc74/src/base_trainer.py | 2025-03-24T10:22:11.515084 | [
{
"name": "get_reward_functions (from self.get_reward_functions())",
"code": "@abstractmethod\ndef get_reward_functions(self) -> List:\n \"\"\"Get list of reward functions.\"\"\"\n pass",
"label": "{\"label\": \"DEBUGGING\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "self.model_args.model_name_or_path",
"reward_funcs": "self.get_reward_functions()",
"args": "self.training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_pr... |
pegasi-ai/feather | https://github.com/pegasi-ai/feather | GNU Affero General Public License v3.0 | examples/deepseek_1_5b_finqa_reasoner.py | https://github.com/pegasi-ai/feather/blob/bed61853c927a65541ed0e66844af9e0fea02b0d/examples/deepseek_1_5b_finqa_reasoner.py | 2025-03-24T10:22:20.754689 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Calculate granular rewards based on XML tag counts and formatting.\"\"\"\n if not completions:\n return [0.0]\n contents = [completion[0]['content'] if isinst... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset... |
uukuguy/Replicate-R1 | https://github.com/uukuguy/Replicate-R1 | MIT License | tasks/unsloth_grpo/unsloth_grpo.py | https://github.com/uukuguy/Replicate-R1/blob/e050c71aebcdffc75b2ec634bb82e4897510606a/tasks/unsloth_grpo/unsloth_grpo.py | 2025-03-24T10:22:23.074378 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
rueckstiess/qurious | https://github.com/rueckstiess/qurious | Unknown | qurious/llms/grpo_grid.py | https://github.com/rueckstiess/qurious/blob/572c59c808f2ab606483eec88f4305030f59ce01/qurious/llms/grpo_grid.py | 2025-03-24T10:22:27.572923 | [
{
"name": "reward_goal_reached (from list item 0)",
"code": "def reward_goal_reached(completions, **kwargs):\n rewards = []\n for i, completion in enumerate(completions):\n reward = 0.0\n _, numeric_actions = extract_actions_from_responses(completion)\n example = {k: v[i] for k, v... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[reward_goal_reached, reward_illegal_actions]",
"args": "training_args",
"train_dataset": "prompt_dataset['train']",
"eval_dataset": "prompt_dataset['eval']",
"peft_config": nu... |
zhuopanyang/algorithm_learning | https://github.com/zhuopanyang/algorithm_learning | Unknown | model/deepseek_sft.py | https://github.com/zhuopanyang/algorithm_learning/blob/1d124a74926e8159b696b5fff18fb7f4b1d9bb80/model/deepseek_sft.py | 2025-03-24T10:22:29.809102 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func, thought_len_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset... |
syafiq/reasoning | https://github.com/syafiq/reasoning | Unknown | src/grpo_phi.py | https://github.com/syafiq/reasoning/blob/6130b0d2b0dc71a446de9d5b5d48678c101baf7f/src/grpo_phi.py | 2025-03-24T10:22:34.311286 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function that checks if the completion has the expected XML format.\"\"\"\n pattern = '<reasoning>.*?</reasoning>\\\\s*<answer>.*?</answer>'\n responses = [comple... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, vulnerability_identification_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "filtered_dataset",
"eval_dataset": null,
"peft_con... |
ShaohonChen/ascend_r1_turtorial | https://github.com/ShaohonChen/ascend_r1_turtorial | Unknown | train_r1_grpo.py | https://github.com/ShaohonChen/ascend_r1_turtorial/blob/370f7a816b0156beb19043262d24bdb03c6cdef5/train_r1_grpo.py | 2025-03-24T10:22:38.851223 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": n... |
Karthik-Dulam/nano-zero | https://github.com/Karthik-Dulam/nano-zero | Unknown | grpo_unsloth.py | https://github.com/Karthik-Dulam/nano-zero/blob/36e795be9ef70202d7daadb2c2ef754ef63144c8/grpo_unsloth.py | 2025-03-24T10:22:43.381652 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, prompts=None, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n rewards = [count_xml(c) for c in contents]\n log_response(prompts, completions, 'xml_co... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward... |
benedikt-schesch/LLMerge | https://github.com/benedikt-schesch/LLMerge | MIT License | train.py | https://github.com/benedikt-schesch/LLMerge/blob/7c02aa9804f76d70f990576febd5eefb3641508d/train.py | 2025-03-24T10:22:45.642712 | [
{
"name": "format_reward (from list item 0)",
"code": "def format_reward(completions: List[List[Dict[str, str]]], log_wandb: bool=True, **kwargs) -> List[float]:\n \"\"\"\n Reward = 0.5 if the completion matches the 'thinking' pattern.\n Otherwise 0.0.\n \"\"\"\n rewards = [0.5 if THINKING_RE... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward, merged_conflict_reward]",
"args": "training_args",
"train_dataset": "dataset['train']",
"eval_dataset": null,
"peft_config": null,
"reward_processing_clas... |
shreyshahi/r1-zero-test | https://github.com/shreyshahi/r1-zero-test | MIT License | train.py | https://github.com/shreyshahi/r1-zero-test/blob/1f6bf399346f263a6f0f7ac0e3645cfc6c448c99/train.py | 2025-03-24T10:23:10.654699 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function that checks if the completion has a specific format.\"\"\"\n pattern = '<think>\\\\n[\\\\s\\\\S]*?</think>\\\\n<answer>\\\\n[\\\\s\\\\S]*?</answer>'\n re... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_c... |
matthewlee626/concise-cognition-submission-final | https://github.com/matthewlee626/concise-cognition-submission-final | Unknown | concision/train/train.py | https://github.com/matthewlee626/concise-cognition-submission-final/blob/243fccc3d7738af248b7fb642f4ac355eb8dca7d/concision/train/train.py | 2025-03-24T10:23:19.642883 | [
{
"name": "Potential lambda reward: lambda_0",
"code": "lambda: [format_reward, accuracy_reward, tag_count_reward, expression_based_accuracy_reward, soft_format_reward]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
}
] | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "reward_funcs",
"args": "training_args",
"train_dataset": "dataset['train']",
"eval_dataset": null,
"peft_config": "peft_config",
"reward_processin... |
cttmayi/AIDemo | https://github.com/cttmayi/AIDemo | Unknown | fine-tune/deepseek_r1/v1.py | https://github.com/cttmayi/AIDemo/blob/6d98e58035e5478966a1a76241c680210c6f942c/fine-tune/deepseek_r1/v1.py | 2025-03-24T10:23:26.413631 | [
{
"name": "reward_len (from list item 0)",
"code": "def reward_len(completions, **kwargs):\n return [abs(20 - len(completion)) for completion in completions]",
"label": "{\"label\": \"LENGTH_BASED\"}"
},
{
"name": "reward_len (from [reward_len])",
"code": "def reward_len(completions, **kw... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[reward_len]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processing_class"... |
guanyilun/scratch | https://github.com/guanyilun/scratch | Unknown | symbolic/attempt5/train_grpo_game24.py | https://github.com/guanyilun/scratch/blob/041ffce2012128f4f3eef2c8ff0cebc1be3bc386/symbolic/attempt5/train_grpo_game24.py | 2025-03-24T10:23:28.762297 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function that checks if the completion has the correct format.\"\"\"\n pattern = '^<reasoning>(?:(?!</reasoning>).)*</reasoning>\\\\n<answer>(?:(?!</answer>).)*</ans... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[format_reward_func, expression_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"r... |
Legionof7/GRPOdx | https://github.com/Legionof7/GRPOdx | Unknown | medical_grpo.py | https://github.com/Legionof7/GRPOdx/blob/b039810b169606493a1e3202dbf1b4d9cec02942/medical_grpo.py | 2025-03-24T10:23:35.514456 | [
{
"name": "doctor_game_reward (from list item 0)",
"code": "def doctor_game_reward(prompts, completions, **kwargs) -> list[float]:\n \"\"\"Stub that always returns 0, the real reward is from multi_turn_generation.\"\"\"\n return [0.0] * len(prompts)",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},... | [
{
"trainer_type": "DoctorGRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[doctor_game_reward]",
"args": "config",
"train_dataset": "train_dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"proc... |
minwukim/VRL | https://github.com/minwukim/VRL | Unknown | train_verification_demo.py | https://github.com/minwukim/VRL/blob/e6aa1718019db1acce0b2c49994827bb88be7e4d/train_verification_demo.py | 2025-03-24T10:23:44.547449 | [
{
"name": "reward_correct (from list item 0)",
"code": "def reward_correct(completions, answer, **kwargs):\n correct = [1.0 if verify(parse(c), parse(gt)) else 0.0 for c, gt in zip(completions, answer)]\n return correct",
"label": "{\"label\": \"ANSWER_CORRECTNESS\"}"
},
{
"name": "reward_... | [
{
"trainer_type": "VerificationGRPOTrainer",
"args": [],
"kwargs": {
"model": "model_name",
"reward_funcs": "[reward_correct, reward_correct_and_format]",
"args": "training_args",
"train_dataset": "train",
"eval_dataset": "test",
"peft_config": null,
"reward_pro... |
debayan/grpo-experiments | https://github.com/debayan/grpo-experiments | Unknown | train_sparql_grpo.py | https://github.com/debayan/grpo-experiments/blob/55535125ed3521e557b00ede2d087b0686d114b3/train_sparql_grpo.py | 2025-03-24T10:23:46.803114 | [
{
"name": "reward_func",
"code": "def reward_func(completions, ground_truth, **kwargs):\n rewards = []\n for c, g in zip(completions, ground_truth):\n extracted_c = c.replace(' ', '').lower()\n g = g.replace(' ', '').lower()\n similarity_reward = SequenceMatcher(None, extracted_c,... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'./Qwen2-1.5B-GRPO/checkpoint-5250/'",
"reward_funcs": "reward_func",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": ... |
aphil311/talos | https://github.com/aphil311/talos | MIT License | rl-cai/train_grpo.py | https://github.com/aphil311/talos/blob/1f93b7e6e0ad7895c079be9ae44267363ac7357c/rl-cai/train_grpo.py | 2025-03-24T10:24:02.574152 | [
{
"name": "reward_len",
"code": "def reward_len(completions: list[str], **kwargs) -> list[float]:\n \"\"\"\n Reward function that rewards completions that align closest to the constitution.\n\n Args:\n completions (list[str]): The completions to reward\n\n Returns:\n list[float]: T... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "args.model",
"reward_funcs": "reward_len",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
"processing_cla... |
joykirat18/thinkingModelProbing | https://github.com/joykirat18/thinkingModelProbing | Unknown | grpo.py | https://github.com/joykirat18/thinkingModelProbing/blob/9bbb93ee9533770b68cabc38c5f4912e9dd6dde3/grpo.py | 2025-03-24T10:24:07.061347 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
The-Last-Byte-Bar/SharkNet | https://github.com/The-Last-Byte-Bar/SharkNet | MIT License | pipeline/grpo_trainer.py | https://github.com/The-Last-Byte-Bar/SharkNet/blob/412b86899088b9ff5d5c47db45ce283779c86021/pipeline/grpo_trainer.py | 2025-03-24T10:24:20.620154 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n \"\"\"Reward function for XML tag formatting.\"\"\"\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": null,... |
htesd/fucking_drug | https://github.com/htesd/fucking_drug | Apache License 2.0 | llms/grpo_demo.py | https://github.com/htesd/fucking_drug/blob/6924b94bf04c77f632f52d438f1a9a579a4b01e1/llms/grpo_demo.py | 2025-03-24T10:24:27.339038 | [
{
"name": "mark_reward (from list item 0)",
"code": "def mark_reward(completions, **kwargs):\n responses = [completion[0]['content'] for completion in completions]\n return [mark_num(response) for response in responses]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"name": "soft_format_... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[mark_reward, soft_format_reward, hard_format_reward, digit_reward, correctness_reward]",
"args": "training_args",
"train_dataset": "data",
"eval_dataset": null,
"peft_config":... |
flashsonic6666/trl | https://github.com/flashsonic6666/trl | Apache License 2.0 | tests/test_grpo_trainer.py | https://github.com/flashsonic6666/trl/blob/b1c825c7b9971d27f155913eb2dca437993fedd7/tests/test_grpo_trainer.py | 2025-03-24T10:24:29.610177 | [
{
"name": "reward_func",
"code": "def reward_func(completions, some_values, **kwargs):\n \"\"\"Reward function that rewards completions with lengths closer to the values in some_values.\"\"\"\n return [float(abs(len(completion) - value)) for completion, value in zip(completions, some_values)]",
"l... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'trl-internal-testing/tiny-Qwen2ForCausalLM-2.5'",
"reward_funcs": "'trl-internal-testing/tiny-Qwen2ForSequenceClassification-2.5'",
"args": null,
"train_dataset": "dataset",
"eval_dataset": null,
"pef... |
zhuopanyang/algorithm_learning | https://github.com/zhuopanyang/algorithm_learning | Unknown | github_model_learning/unlock-deepseek-main/Datawhale-R1/train_Datawhale-R1.py | https://github.com/zhuopanyang/algorithm_learning/blob/1d124a74926e8159b696b5fff18fb7f4b1d9bb80/github_model_learning/unlock-deepseek-main/Datawhale-R1/train_Datawhale-R1.py | 2025-03-24T10:24:43.067362 | [
{
"name": "format_reward_func (from list item 0)",
"code": "def format_reward_func(completions, **kwargs):\n \"\"\"\n 格式奖励函数,检查模型输出格式是否匹配: <think>...</think><answer>...</answer>\n\n 参数:\n completions (list[str]): 生成的输出\n 返回:\n list[float]: 奖励分数\n \"\"\"\n rewards = []\n fo... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[format_reward_func, equation_reward_func]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset",
"peft_config": n... |
shibing624/MedicalGPT | https://github.com/shibing624/MedicalGPT | Apache License 2.0 | grpo_training.py | https://github.com/shibing624/MedicalGPT/blob/a3e0d34f491b430dece391b8f22ba06755b55a8b/grpo_training.py | 2025-03-24T10:25:05.968107 | [
{
"name": "accuracy_reward (from list item 0)",
"code": "def accuracy_reward(completions, solution, **kwargs):\n \"\"\"Reward function that checks if the completion is the same as the ground truth.\"\"\"\n contents = [completion[0]['content'] for completion in completions]\n rewards = []\n for c... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model_args.model_name_or_path",
"reward_funcs": "[accuracy_reward, format_reward]",
"args": "training_args",
"train_dataset": "train_dataset",
"eval_dataset": "test_dataset if training_args.eval_strategy != ... |
chunhuizhang/llm_rl | https://github.com/chunhuizhang/llm_rl | Unknown | tutorials/RL4Agents/scripts/tool_grpo.py | https://github.com/chunhuizhang/llm_rl/blob/1c2b9baff36b219076b07f5aeeb4f748d7461388/tutorials/RL4Agents/scripts/tool_grpo.py | 2025-03-24T10:25:12.701632 | [
{
"name": "agent_reward",
"code": "def agent_reward(completions, **kwargs):\n rewards = []\n for completion in completions:\n content = completion[0]['content']\n match = re.search('<tool_call>\\\\s*(\\\\{.*?\\\\})\\\\s*</tool_call>', content, re.DOTALL)\n if not match:\n ... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'Qwen/Qwen2.5-0.5B-Instruct'",
"reward_funcs": "agent_reward",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_processing_classes": null,
... |
s-smits/grpo-optuna | https://github.com/s-smits/grpo-optuna | Unknown | main.py | https://github.com/s-smits/grpo-optuna/blob/39d5e5009e54c0b896b6ad2fedd3ed81b6342a6c/main.py | 2025-03-24T10:25:17.269538 | [
{
"name": "weighted_xmlcount_reward_func (from list item 0)",
"code": "def weighted_xmlcount_reward_func(completions, **kwargs):\n return [x * xmlcount_weight for x in xmlcount_reward_func(completions, xml_count_reward=xml_count_reward, **kwargs)]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[weighted_xmlcount_reward_func, weighted_soft_format_reward_func, weighted_strict_format_reward_func, weighted_int_reward_func, weighted_correctness_reward_func]",
"args": "training_args",
... |
YeonwooSung/ai_book | https://github.com/YeonwooSung/ai_book | Unknown | LLMs/training/train_grpo.py | https://github.com/YeonwooSung/ai_book/blob/9e8bce3b74cd5f329aff01bc8f95ae2fae983085/LLMs/training/train_grpo.py | 2025-03-24T10:25:21.808376 | [
{
"name": "xmlcount_reward_func (from list item 0)",
"code": "def xmlcount_reward_func(completions, **kwargs) -> list[float]:\n contents = [completion[0]['content'] for completion in completions]\n return [count_xml(c) for c in contents]",
"label": "{\"label\": \"COMPUTATIONAL\"}"
},
{
"na... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "model",
"reward_funcs": "[xmlcount_reward_func, soft_format_reward_func, strict_format_reward_func, int_reward_func, correctness_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_data... |
Eric-is-good/pretrain-LLM-from-scratch | https://github.com/Eric-is-good/pretrain-LLM-from-scratch | Unknown | train/grpo_train.py | https://github.com/Eric-is-good/pretrain-LLM-from-scratch/blob/5e043777ff0d1397ca54ae3baa4789b0118c1851/train/grpo_train.py | 2025-03-24T10:25:24.072706 | [
{
"name": "length_reward_func (from list item 0)",
"code": "def length_reward_func(completions, **kwargs):\n score = [float(len(completion[0]['content'])) * 0.002 for completion in completions]\n return score",
"label": "{\"label\": \"LENGTH_BASED\"}"
},
{
"name": "format_reward_func (from... | [
{
"trainer_type": "GRPOTrainer",
"args": [],
"kwargs": {
"model": "'model/'",
"reward_funcs": "[length_reward_func, format_reward_func, answer_reward_func]",
"args": "training_args",
"train_dataset": "dataset",
"eval_dataset": null,
"peft_config": null,
"reward_... |