---
license: gpl-3.0
pipeline_tag: robotics
---

---
# Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation

> **Repository address: [https://github.com/Skylark0924/Rofunc](https://github.com/Skylark0924/Rofunc)** <br>
> **Documentation: [https://rofunc.readthedocs.io/](https://rofunc.readthedocs.io/)**
<style>
/* Shared table styles */
table {
  width: 100%;
  border-collapse: collapse;
  table-layout: fixed;
  margin-bottom: 20px;
}

td {
  padding: 0;
  overflow: hidden;
  vertical-align: middle;
}

img {
  width: 100%;
  display: block;
  object-fit: contain;
}

/* First table - 4 columns per row */
.table-four-cols tr {
  height: 180px;
}

.table-four-cols td {
  width: 25%;
}

/* Second table - 3 columns per row */
.table-three-cols tr {
  height: 180px;
}

.table-three-cols td {
  width: 33.33%;
}

/* Hide empty cells in the second table */
.table-three-cols td:empty {
  display: none;
}
</style>
<!-- First table - 4-column layout -->
<table class="table-four-cols">
  <tr>
    <td><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspSpatulaRofuncRLPPO.gif"></td>
    <td><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspPower_drillRofuncRLPPO.gif"></td>
    <td><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspPhillips_Screw_DriverRofuncRLPPO.gif"></td>
    <td><img src="doc/img/task_gif/CURIQbSoftHandSynergyGraspLarge_clampRofuncRLPPO.gif"></td>
  </tr>
  <tr>
    <td><img src="doc/img/task_gif/UDH_Random_Motion.gif"></td>
    <td><img src="doc/img/task_gif/H1_Random_Motion.gif"></td>
    <td><img src="doc/img/task_gif/Bruce_Random_Motion.gif"></td>
    <td><img src="doc/img/task_gif/Walker_Random_Motion.gif"></td>
  </tr>
</table>

<!-- Second table - 3-column layout -->
<table class="table-three-cols">
  <tr>
    <td><img src="doc/img/task_gif/CURICoffeeStirring.gif"></td>
    <td><img src="doc/img/task_gif/CURIScrew.gif"></td>
    <td><img src="doc/img/task_gif/CURITaichiPushingHand.gif"></td>
  </tr>
  <tr>
    <td><img src="doc/img/task_gif/HumanoidFlipRofuncRLAMP.gif"></td>
    <td><img src="doc/img/task_gif/HumanoidDanceRofuncRLAMP.gif"></td>
    <td><img src="doc/img/task_gif/HumanoidRunRofuncRLAMP.gif"></td>
  </tr>
  <tr>
    <td><img src="doc/img/task_gif/HumanoidASEHeadingSwordShieldRofuncRLASE.gif"></td>
    <td><img src="doc/img/task_gif/HumanoidASEStrikeSwordShieldRofuncRLASE.gif"></td>
    <td><img src="doc/img/task_gif/HumanoidASELocationSwordShieldRofuncRLASE.gif"></td>
  </tr>
  <tr>
    <td><img src="doc/img/task_gif/BiShadowHandLiftUnderarmRofuncRLPPO.gif"></td>
    <td><img src="doc/img/task_gif/BiShadowHandDoorOpenOutwardRofuncRLPPO.gif"></td>
    <td><img src="doc/img/task_gif/BiShadowHandSwingCupRofuncRLPPO.gif"></td>
  </tr>
</table>

The Rofunc package focuses on **Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD)** for **(humanoid) robot manipulation**. It provides convenient Python functions covering _demonstration collection, data pre-processing, LfD algorithms, planning, and control methods_. We also provide `IsaacGym`- and `OmniIsaacGym`-based robot simulators for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on real robots.

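To get started, the package can be installed from PyPI; the sketch below is a minimal setup under the assumption of a working Python environment, and the simulator backends (`IsaacGym`, `OmniIsaacGym`) require separate installation as described in the documentation linked above:

```shell
# Minimal installation sketch: install the released Rofunc package from PyPI.
# Simulator extras (IsaacGym / OmniIsaacGym) are set up separately per the docs.
pip install rofunc

# Check that the package imports cleanly.
python -c "import rofunc"
```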
## Citation

If you use Rofunc in a scientific publication, we would appreciate citations to the following paper:

```bibtex
@software{liu2023rofunc,
  title     = {Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation},
  author    = {Liu, Junjia and Dong, Zhipeng and Li, Chenzui and Li, Zhihao and Yu, Minghao and Delehelle, Donatien and Chen, Fei},
  year      = {2023},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.10016946},
  url       = {https://doi.org/10.5281/zenodo.10016946},
}
```

> [!WARNING]
> **If our code is found to be used in a published paper without proper citation, we reserve the right to address this issue formally by contacting the editor to report potential academic misconduct!**