---
license: apache-2.0
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- reinforcement-learning
- grpo
- peft
- lora
- beam-mechanics
- structural-engineering
- math
- reasoning
language:
- en
pipeline_tag: text-generation
datasets:
- lamm-mit/BeamRL-TrainData
- lamm-mit/BeamRL-EvalData
---

# BeamPERL — DeepSeek-R1-Distill-Qwen-1.5B

**BeamPERL** is a parameter-efficient, reinforcement-learning fine-tuned language model specialized in beam mechanics problem-solving. It is built on top of [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) using LoRA adapters trained with Group Relative Policy Optimization (GRPO) and verifiable reward signals.

## Model Details

| Property | Value |
|---|---|
| Base model | `deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B` |
| Fine-tuning method | GRPO (RL) + LoRA (PEFT) |
| LoRA rank / alpha | 32 / 128 |
| LoRA dropout | 0.05 |
| LoRA target modules | q, k, v, o, gate, up, down projections |
| Training precision | bfloat16 |
| Max sequence length | 2048 tokens (256 prompt + 1792 completion) |
| Training dataset | `lamm-mit/BeamRL-TrainData` (synthetic beam mechanics QA) |

### Reward Functions

| Reward | Weight | Description |
|---|---|---|
| Accuracy | 0.667 | Correctness of predicted reaction forces / coefficients |
| Format | 0.333 | Requires reasoning in `<think>` tags and the answer in `\boxed{}` |

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "tphage/BeamPERL", torch_dtype="bfloat16", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("tphage/BeamPERL")

prompt = (
    "Determine the reaction forces at the pin support (x=0.0*L) and the roller "
    "support (x=9.0*L) for a statically loaded beam with a length of 9*L, a point "
    "load of -13*P at x=3.0*L, and supports at x=0.0*L (pin) and x=9.0*L (roller)."
)
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1792, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The model reasons step-by-step inside `<think>...</think>` tags and gives its final answer in `\boxed{...}` format.

## Training

LoRA adapters were trained with GRPO via the [BeamPERL framework](https://github.com/tphage/BeamPERL) on a synthetic dataset of beam mechanics questions generated with the SymBeam library. The base model weights were kept frozen throughout training.

## Citation

```bibtex
@misc{hage2026beamperlparameterefficientrlverifiable,
      title={BeamPERL: Parameter-Efficient RL with Verifiable Rewards Specializes Compact LLMs for Structured Beam Mechanics Reasoning},
      author={Tarjei Paule Hage and Markus J. Buehler},
      year={2026},
      eprint={2603.04124},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2603.04124},
}
```

## Acknowledgements

Built upon [Tina](https://arxiv.org/abs/2504.15777) and [Open R1](https://github.com/huggingface/open-r1). Dataset generation uses a custom version of [SymBeam](https://github.com/amcc1996/symbeam).
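As a sanity check for the usage example above, the reference reaction forces for that prompt can be derived by hand from static equilibrium. The sketch below is an illustration only (it is not part of the model or the reward pipeline) and works with the coefficients of `P` as exact fractions:

```python
from fractions import Fraction

# Beam from the example prompt: length 9*L, pin at x=0, roller at x=9*L,
# point load of -13*P (downward) at x=3.0*L. All reactions are in units of P.
load = 13                    # magnitude of the downward load

# Moment balance about the pin (x=0): R2 * 9L - 13P * 3L = 0
R2 = Fraction(load * 3, 9)   # roller reaction -> 13/3

# Vertical force balance: R1 + R2 - 13P = 0
R1 = Fraction(load) - R2     # pin reaction -> 26/3

print(f"R1 = {R1}*P, R2 = {R2}*P")  # R1 = 26/3*P, R2 = 13/3*P
```

These are the values the accuracy reward would compare a model completion against for this particular problem.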