| vecId (string, lengths 12-23) | id (string, lengths 2-13) | conference (string, 11 classes) | year (float64, 2.02k-2.03k) | title (string, lengths 6-189) | abstract (string, lengths 10-4.74k) | author (string, lengths 0-7.45k) | aff (string, lengths 0-7.16k) | status (string, 11 classes) | track (string, 4 classes) | keywords (string, lengths 0-804) | github (string, lengths 0-141) | site (string, lengths 0-193) | gsCitation (float64, -1 to 11.1k) | arxiv (string, lengths 0-12) | text (string, lengths 58-4.82k) | vector (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
corl_2024_L4p6zTlj6k | L4p6zTlj6k | corl | 2,024 | TidyBot++: An Open-Source Holonomic Mobile Manipulator for Robot Learning | Exploiting the promise of recent advances in imitation learning for mobile manipulation will require the collection of large numbers of human-guided demonstrations. This paper proposes an open-source design for an inexpensive, robust, and flexible mobile manipulator that can support arbitrary arms, enabling a wide rang... | Jimmy Wu;William Chong;Robert Holmberg;Aaditya Prasad;Yihuai Gao;Oussama Khatib;Shuran Song;Szymon Rusinkiewicz;Jeannette Bohg | Princeton University;;;Stanford University;Stanford University;Stanford University;Stanford University;Princeton University;Stanford University | Poster | main | mobile manipulation;imitation learning;holonomic drive | https://github.com/jimmyyhwu/tidybot2 | https://openreview.net/forum?id=L4p6zTlj6k | 6 | TidyBot++: An Open-Source Holonomic Mobile Manipulator for Robot Learning
Exploiting the promise of recent advances in imitation learning for mobile manipulation will require the collection of large numbers of human-guided demonstrations. This paper proposes an open-source design for an inexpensive, robust, and flexibl... | [
-0.03869860619306564,
-0.006958054378628731,
-0.05067227780818939,
0.03006555140018463,
-0.00011605065810726956,
-0.05375014990568161,
0.0027963591273874044,
0.0012773636262863874,
0.0423770397901535,
0.031679559499025345,
-0.04361569508910179,
0.012095660902559757,
-0.012395940721035004,
... | |
corl_2024_LZh48DTg71 | LZh48DTg71 | corl | 2,024 | Evaluating Real-World Robot Manipulation Policies in Simulation | The field of robotics has made significant advances towards generalist robot manipulation policies. However, real-world evaluation of such policies is not scalable and faces reproducibility challenges, issues that are likely to worsen as policies broaden the spectrum of tasks they can perform. In this work, we demonstr... | Xuanlin Li;Kyle Hsu;Jiayuan Gu;Oier Mees;Karl Pertsch;Homer Rich Walke;Chuyuan Fu;Ishikaa Lunawat;Isabel Sieh;Sean Kirmani;Sergey Levine;Jiajun Wu;Chelsea Finn;Hao Su;Quan Vuong;Ted Xiao | University of California, San Diego;Stanford University;University of California, San Diego;Electrical Engineering & Computer Science Department, University of California, Berkeley;Stanford University;University of California, Berkeley;Google;;Stanford University;Google DeepMind;Google;Stanford University;Google;Univer... | Poster | main | real-to-sim;policy evaluation;robot manipulation | https://github.com/simpler-env/SimplerEnv | https://openreview.net/forum?id=LZh48DTg71 | 67 | Evaluating Real-World Robot Manipulation Policies in Simulation
The field of robotics has made significant advances towards generalist robot manipulation policies. However, real-world evaluation of such policies is not scalable and faces reproducibility challenges, issues that are likely to worsen as policies broaden t... | [
-0.09616430103778839,
-0.012197310104966164,
-0.00015714735491201282,
0.018105236813426018,
-0.033772870898246765,
0.012876489199697971,
-0.02275714837014675,
0.03023741953074932,
0.027483489364385605,
0.040341369807720184,
-0.019854355603456497,
0.028618555516004562,
0.017221374437212944,
... | |
corl_2024_LiwdXkMsDv | LiwdXkMsDv | corl | 2,024 | Uncertainty-Aware Decision Transformer for Stochastic Driving Environments | Offline Reinforcement Learning (RL) enables policy learning without active interactions, making it especially appealing for self-driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which, however, fails in stochastic environments with incorrect assumptions that identical act... | Zenan Li;Fan Nie;Qiao Sun;Fang Da;Hang Zhao | ;;;QCraft Inc;Tsinghua University | Poster | main | Self-Driving;Decision Transformer;Uncertainty-Aware Planning | https://github.com/Emiyalzn/CoRL24-UNREST | https://openreview.net/forum?id=LiwdXkMsDv | 5 | Uncertainty-Aware Decision Transformer for Stochastic Driving Environments
Offline Reinforcement Learning (RL) enables policy learning without active interactions, making it especially appealing for self-driving tasks. Recent successes of Transformers inspire casting offline RL as sequence modeling, which, however, fai... | [
-0.046867676079273224,
0.012780397199094296,
-0.020653868094086647,
0.02608320489525795,
-0.038434479385614395,
0.010364249348640442,
0.021250909194350243,
0.03996439650654793,
0.03227749839425087,
0.0166145171970129,
-0.022687537595629692,
0.01598948985338211,
0.01430098433047533,
0.02067... | |
corl_2024_Lixj7WEGEy | Lixj7WEGEy | corl | 2,024 | MBC: Multi-Brain Collaborative Control for Quadruped Robots | In the field of locomotion task of quadruped robots, Blind Policy and Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, suitable for known and structured environments, but it lacks adaptability in complex or unknown environments. The P... | Hang Liu;Yi Cheng;Rankun Li;Xiaowen Hu;Linqi Ye;Houde Liu | University of Michigan - Ann Arbor;Tsinghua University;Shanghai University;Shanghai University;Tsinghua University;Shanghai University | Poster | main | Quadruped Robots;Perception Fails;Multi-Brain Collaborative | https://openreview.net/forum?id=Lixj7WEGEy | 0 | MBC: Multi-Brain Collaborative Control for Quadruped Robots
In the field of locomotion task of quadruped robots, Blind Policy and Perceptive Policy each have their own advantages and limitations. The Blind Policy relies on preset sensor information and algorithms, suitable for known and structured environments, but it ... | [
-0.0474429652094841,
-0.03604409098625183,
-0.017089074477553368,
-0.005736386403441429,
-0.012821424752473831,
-0.05257892608642578,
0.010632175020873547,
-0.0014063846319913864,
0.06344203650951385,
0.016571784391999245,
-0.041863612830638885,
-0.03271865099668503,
0.04770161211490631,
0... | ||
corl_2024_LmOF7UAOZ7 | LmOF7UAOZ7 | corl | 2,024 | A Planar-Symmetric SO(3) Representation for Learning Grasp Detection | Planar-symmetric hands, such as parallel grippers, are widely adopted in both research and industrial fields.
Their symmetry, however, introduces ambiguity and discontinuity in the SO(3) representation, which hinders both the training and inference of neural network-based grasp detectors.
We propose a novel SO(3) repre... | Tianyi Ko;Takuya Ikeda;Hiroya Sato;Koichi Nishiwaki | Woven by Toyota, Inc.;Woven by Toyota, Inc.;The University of Tokyo, Tokyo University;Woven by Toyota | Poster | main | Grasp Detection;Rotation Representation;Parallel Gripper | https://openreview.net/forum?id=LmOF7UAOZ7 | 1 | A Planar-Symmetric SO(3) Representation for Learning Grasp Detection
Planar-symmetric hands, such as parallel grippers, are widely adopted in both research and industrial fields.
Their symmetry, however, introduces ambiguity and discontinuity in the SO(3) representation, which hinders both the training and inference of... | [
-0.058036766946315765,
-0.02613135054707527,
-0.04367562755942345,
-0.034792449325323105,
-0.019617019221186638,
-0.018913768231868744,
0.021356642246246338,
0.02909241057932377,
0.022430025041103363,
0.028537211939692497,
-0.025835243985056877,
0.02600180357694626,
0.030110273510217667,
-... | ||
corl_2024_M0Gv07MUMU | M0Gv07MUMU | corl | 2,024 | Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving | The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to rare or unseen inputs within their training distributions. To address this, we propose TOKEN, a ... | Thomas Tian;Boyi Li;Xinshuo Weng;Yuxiao Chen;Edward Schmerling;Yue Wang;Boris Ivanovic;Marco Pavone | University of California, Berkeley;University of California, Berkeley;NVIDIA;NVIDIA;NVIDIA;NVIDIA;Stanford University;California Institute of Technology | Poster | main | Multi-modal LLM;Autonomous Driving;Representation Alignment | https://openreview.net/forum?id=M0Gv07MUMU | 13 | Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving
The autonomous driving industry is increasingly adopting end-to-end learning from sensory inputs to minimize human biases in system design. Traditional end-to-end driving models, however, suffer from long-tail events due to... | [
-0.05704490467905998,
-0.030620234087109566,
0.024639718234539032,
-0.023664435371756554,
-0.04795452579855919,
0.0006733828922733665,
-0.03498140722513199,
0.02049936354160309,
-0.001661892980337143,
0.006518760696053505,
-0.021456245332956314,
0.025891026481986046,
0.01899043284356594,
0... | ||
corl_2024_M0JtsLuhEE | M0JtsLuhEE | corl | 2,024 | T$^2$SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects | Recognizing and manipulating transparent tableware from partial view RGB image observations is made challenging by the difficulty in obtaining reliable depth measurements of transparent objects. In this paper we present the Transparent Tableware SuperQuadric Network (T$^2$SQNet), a neural network model that leverages ... | Young Hun Kim;Seungyeon Kim;Yonghyeon Lee;Frank C. Park | Seoul National University;Seoul National University;Korea Institute for Advanced Study;Seoul National University | Poster | main | Transparent objects;Shape recognition;Object manipulation | https://github.com/seungyeon-k/T2SQNet-public | https://openreview.net/forum?id=M0JtsLuhEE | 0 | T$^2$SQNet: A Recognition Model for Manipulating Partially Observed Transparent Tableware Objects
Recognizing and manipulating transparent tableware from partial view RGB image observations is made challenging by the difficulty in obtaining reliable depth measurements of transparent objects. In this paper we present t... | [
-0.04160411283373833,
0.02715097740292549,
-0.03678639978170395,
0.0098408292979002,
0.008734435774385929,
-0.016497859731316566,
0.03131512179970741,
-0.009588738903403282,
0.015498838387429714,
0.035740695893764496,
0.010970563627779484,
0.012529783882200718,
-0.006050148978829384,
0.012... | |
corl_2024_MfIUKzihC8 | MfIUKzihC8 | corl | 2,024 | CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning | Evaluating autonomous vehicle stacks (AVs) in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and hard to intuitively control. Existing approaches address these challenges by proposing methods that rely on heuristics or g... | Luke Rowe;Roger Girgis;Anthony Gosselin;Bruno Carrez;Florian Golemo;Felix Heide;Liam Paull;Christopher Pal | Université de Montréal;Mila - Quebec Artificial Intelligence Institute;Montreal Institute for Learning Algorithms, University of Montreal, Université de Montréal;Mila;Mila;Algolux;;Polytechnique Montreal | Poster | main | offline reinforcement learning;autonomous driving;simulation | https://github.com/montrealrobotics/ctrl-sim/ | https://openreview.net/forum?id=MfIUKzihC8 | 6 | CtRL-Sim: Reactive and Controllable Driving Agents with Offline Reinforcement Learning
Evaluating autonomous vehicle stacks (AVs) in simulation typically involves replaying driving logs from real-world recorded traffic. However, agents replayed from offline data are not reactive and hard to intuitively control. Existin... | [
-0.0417555496096611,
-0.029635421931743622,
0.014228802174329758,
0.00981198437511921,
-0.025703029707074165,
-0.009161334484815598,
0.022454531863331795,
0.038602035492658615,
-0.004029752220958471,
0.03503058850765228,
-0.030737251043319702,
-0.014029332436621189,
-0.02539907582104206,
0... | |
corl_2024_MfuzopqVOX | MfuzopqVOX | corl | 2,024 | LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting | Timely capturing the dense geometry of the surrounding scene with unlabeled LiDAR data is valuable but under-explored for mobile robotic applications. Its value lies in the huge amount of such unlabeled data, enabling self-supervised learning for various downstream tasks. Current dynamic 3D scene reconstruction approac... | Chuanyu Pan;Aolin Xu | ; | Poster | main | 3D perception;lidar;opacity grid;occupancy grid;neural rendering;self-supervised learning;mobile robot;autonomous driving | https://openreview.net/forum?id=MfuzopqVOX | 0 | LiDARGrid: Self-supervised 3D Opacity Grid from LiDAR for Scene Forecasting
Timely capturing the dense geometry of the surrounding scene with unlabeled LiDAR data is valuable but under-explored for mobile robotic applications. Its value lies in the huge amount of such unlabeled data, enabling self-supervised learning f... | [
-0.06580168008804321,
0.005559203214943409,
-0.027336999773979187,
-0.042322274297475815,
-0.031454239040613174,
-0.029766542837023735,
0.01818448305130005,
0.013065749779343605,
0.0033985786139965057,
0.041098229587078094,
-0.024313978850841522,
-0.008823322132229805,
0.008211299777030945,
... | ||
corl_2024_MsCbbIqHRA | MsCbbIqHRA | corl | 2,024 | ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter | Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o's advanced contextual reasoning for grasping strategies. ThinkGrasp can effectively identif... | Yaoyao Qian;Xupeng Zhu;Ondrej Biza;Shuo Jiang;Linfeng Zhao;Haojie Huang;Yu Qi;Robert Platt | Northeastern University;Northeastern University;Northeastern University;;Meta;Northeastern University;Northeastern University;Northeastern University | Poster | main | Robotic Grasping;Vision-Language Models;Language Conditioned Grasping | https://github.com/H-Freax/ThinkGrasp | https://openreview.net/forum?id=MsCbbIqHRA | 14 | ThinkGrasp: A Vision-Language System for Strategic Part Grasping in Clutter
Robotic grasping in cluttered environments remains a significant challenge due to occlusions and complex object arrangements. We have developed ThinkGrasp, a plug-and-play vision-language grasping system that makes use of GPT-4o's advanced cont... | [
-0.03967937454581261,
-0.017566390335559845,
0.023371754214167595,
0.016843067482113838,
-0.009741362184286118,
-0.02816258743405342,
-0.0010356655111536384,
0.0010667823953554034,
0.012117991223931313,
0.025851713493466377,
-0.005091434810310602,
-0.021756021305918694,
-0.007820331491529942... | |
corl_2024_MwZJ96Okl3 | MwZJ96Okl3 | corl | 2,024 | Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance | Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers' situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts.
Moreover, collecting the data to train such an SA model is challenging:
bein... | Abhijat Biswas;Pranay Gupta;Shreeya Khurana;David Held;Henny Admoni | Carnegie Mellon University;Carnegie Mellon University;;Carnegie Mellon University;Carnegie Mellon University | Poster | main | driver awareness;driving assistance;situational awareness | https://openreview.net/forum?id=MwZJ96Okl3 | 1 | Modeling Drivers’ Situational Awareness from Eye Gaze for Driving Assistance
Intelligent driving assistance can alert drivers to objects in their environment; however, such systems require a model of drivers' situational awareness (SA) (what aspects of the scene they are already aware of) to avoid unnecessary alerts.
... | [
-0.05852394551038742,
-0.049698006361722946,
0.007448072079569101,
-0.008215133100748062,
-0.03683789074420929,
-0.026136908680200577,
0.00857498962432146,
0.025170980021357536,
0.011449102312326431,
0.03786063939332962,
-0.016240868717432022,
-0.015303349122405052,
0.040909942239522934,
0... | ||
corl_2024_MyyZZAPgpy | MyyZZAPgpy | corl | 2,024 | SHADOW: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer | Data collection in robotics is spread across diverse hardware, and this variation will increase as new hardware is developed. Effective use of this growing body of data requires methods capable of learning from diverse robot embodiments. We consider the setting of training a policy using expert trajectories from a sing... | Marion Lepert;Ria Doshi;Jeannette Bohg | Stanford University;University of California, Berkeley;Stanford University | Poster | main | Cross-embodiment learning;Imitation Learning;Manipulation | https://openreview.net/forum?id=MyyZZAPgpy | 4 | SHADOW: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer
Data collection in robotics is spread across diverse hardware, and this variation will increase as new hardware is developed. Effective use of this growing body of data requires methods capable of learning from diverse robot embodiments. We cons... | [
-0.03264060616493225,
-0.05121103301644325,
-0.0519457682967186,
0.005965130403637886,
-0.02577083371579647,
-0.046361781656742096,
0.020921580493450165,
0.02960982359945774,
0.03284265846014023,
0.003687451593577862,
-0.021197106689214706,
-0.007517258170992136,
-0.007967283949255943,
0.0... | ||
corl_2024_N1K4B8N3n1 | N1K4B8N3n1 | corl | 2,024 | Scaling Safe Multi-Agent Control for Signal Temporal Logic Specifications | Existing methods for safe multi-agent control using logic specifications like Signal Temporal Logic (STL) often face scalability issues. This is because they rely either on single-agent perspectives or on Mixed Integer Linear Programming (MILP)-based planners, which are complex to optimize. These methods have proven to... | Joe Eappen;Zikang Xiong;Dipam Patel;Aniket Bera;Suresh Jagannathan | Purdue University;Purdue University;Purdue University;University of Maryland, College Park; | Poster | main | Multi-Robot Systems;Path Planning for Multiple Mobile Robots or Agents;Collision Avoidance;Hybrid Logical/Dynamical Planning and Verification;Deep Learning Methods | https://github.com/jeappen/mastl-gcbf | https://openreview.net/forum?id=N1K4B8N3n1 | 1 | Scaling Safe Multi-Agent Control for Signal Temporal Logic Specifications
Existing methods for safe multi-agent control using logic specifications like Signal Temporal Logic (STL) often face scalability issues. This is because they rely either on single-agent perspectives or on Mixed Integer Linear Programming (MILP)-b... | [
-0.05630621314048767,
-0.027764659374952316,
-0.01858992874622345,
0.006857926491647959,
-0.011856859549880028,
0.011995591223239899,
-0.01923733949661255,
-0.008046386763453484,
0.0072602457366883755,
0.04254411533474922,
-0.02870802953839302,
0.006881048437207937,
-0.004261347930878401,
... | |
corl_2024_N5IS6DzBmL | N5IS6DzBmL | corl | 2,024 | Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation | Humans possess a remarkable talent for flexibly alternating to different senses when interacting with the environment. Picture a chef skillfully gauging the timing of ingredient additions and controlling the heat according to the colors, sounds, and aromas, seamlessly navigating through every stage of the complex cooki... | Ruoxuan Feng;Di Hu;Wenke Ma;Xuelong Li | Renmin University of China;Renmin University of China;Northwestern Polytechnical University;Department of Computer Science, University of Massachusetts at Amherst | Poster | main | Multi-Sensory;Robotic Manipulation;Multi-Stage | https://github.com/GeWu-Lab/MS-Bot | https://openreview.net/forum?id=N5IS6DzBmL | 7 | Play to the Score: Stage-Guided Dynamic Multi-Sensory Fusion for Robotic Manipulation
Humans possess a remarkable talent for flexibly alternating to different senses when interacting with the environment. Picture a chef skillfully gauging the timing of ingredient additions and controlling the heat according to the colo... | [
-0.030132146552205086,
-0.029588378965854645,
-0.04083871468901634,
0.029363373294472694,
-0.0059392391704022884,
-0.005911113228648901,
0.0036844846326857805,
-0.007804919499903917,
0.0167161226272583,
0.025125747546553612,
0.0027375814970582724,
-0.00020859995856881142,
0.00136175926309078... | |
corl_2024_NCnplCf4wo | NCnplCf4wo | corl | 2,024 | Learning a Distributed Hierarchical Locomotion Controller for Embodied Cooperation | In this work, we propose a distributed hierarchical locomotion control strategy for whole-body cooperation and demonstrate the potential for migration into large numbers of agents. Our method utilizes a hierarchical structure to break down complex tasks into smaller, manageable sub-tasks. By incorporating spatiotempora... | Chuye Hong;Kangyao Huang;Huaping Liu | Tsinghua University;Tsinghua University;Tsinghua University | Poster | main | Cooperation;Locomotion;Hierarchical reinforcement learning | https://openreview.net/forum?id=NCnplCf4wo | 3 | Learning a Distributed Hierarchical Locomotion Controller for Embodied Cooperation
In this work, we propose a distributed hierarchical locomotion control strategy for whole-body cooperation and demonstrate the potential for migration into large numbers of agents. Our method utilizes a hierarchical structure to break do... | [
-0.07174701243638992,
-0.03236958384513855,
-0.013255584985017776,
0.03985945135354996,
-0.007031022105365992,
-0.004829482641071081,
0.0041574337519705296,
-0.006145771127194166,
0.04201000928878784,
-0.0013765415642410517,
-0.04490213468670845,
-0.011355308815836906,
0.006929055787622929,
... | ||
corl_2024_NiA8hVdDS7 | NiA8hVdDS7 | corl | 2,024 | RoboKoop: Efficient Control Conditioned Representations from Visual Input in Robotics using Koopman Operator | Developing agents that can perform complex control tasks from high-dimensional observations is a core ability of autonomous agents that requires underlying robust task control policies and adapting the underlying visual representations to the task. Most existing policies need a lot of training samples and treat this pr... | Hemant Kumawat;Biswadeep Chakraborty;Saibal Mukhopadhyay | Georgia Institute of Technology;Georgia Institute of Technology;Georgia Institute of Technology | Poster | main | Feature extraction;Task Feedback;Control | https://openreview.net/forum?id=NiA8hVdDS7 | 3 | RoboKoop: Efficient Control Conditioned Representations from Visual Input in Robotics using Koopman Operator
Developing agents that can perform complex control tasks from high-dimensional observations is a core ability of autonomous agents that requires underlying robust task control policies and adapting the underlyin... | [
-0.026778094470500946,
-0.037179503589868546,
0.005168430507183075,
0.001332449959591031,
-0.013149297796189785,
-0.02345849573612213,
0.027423571795225143,
0.02545025385916233,
0.027792414650321007,
0.035575028508901596,
-0.025800656527280807,
-0.01936432346701622,
0.017381785437464714,
0... | ||
corl_2024_O05tIQt2d5 | O05tIQt2d5 | corl | 2,024 | TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation | Legged navigation is typically examined within open-world, off-road, and challenging environments. In these scenarios, estimating external disturbances requires a complex synthesis of multi-modal information. This underlines a major limitation in existing works that primarily focus on avoiding obstacles. In this work, ... | Junli Ren;Yikai Liu;Yingru Dai;Junfeng Long;Guijin Wang | University of Hong Kong;;;Shanghai AI Laboratory;Department of Electronic Engineering, Tsinghua University | Poster | main | Navigation;Task Planning;Reinforcement Learning | https://openreview.net/forum?id=O05tIQt2d5 | 3 | TOP-Nav: Legged Navigation Integrating Terrain, Obstacle and Proprioception Estimation
Legged navigation is typically examined within open-world, off-road, and challenging environments. In these scenarios, estimating external disturbances requires a complex synthesis of multi-modal information. This underlines a major ... | [
-0.04093024507164955,
-0.046503257006406784,
-0.003517731325700879,
0.02871391549706459,
0.012336280196905136,
-0.002282488625496626,
-0.0011689248494803905,
0.011247513815760612,
-0.004135929513722658,
0.031445059925317764,
-0.006901673506945372,
-0.02360224723815918,
-0.0011418210342526436... | ||
corl_2024_O0oK2bVist | O0oK2bVist | corl | 2,024 | Adapting Humanoid Locomotion over Challenging Terrain via Two-Phase Training | Humanoid robots are a key focus in robotics, with their capacity to navigate tough terrains being essential for many uses. While strides have been made, creating adaptable locomotion for complex environments is still tough. Recent progress in learning-based systems offers hope for robust legged locomotion, but challeng... | Wenhao Cui;Shengtao Li;Huaxing Huang;Bangyu Qin;Tianchu Zhang;hanjinchao;Liang Zheng;Ziyang Tang;Chenxu Hu;NING Yan;Jiahao Chen;Zheyuan Jiang | University of Southern California;North University of China;Noetic Robotics;Shanghai Jiaotong University;Noetix Robotics;Noetic Robotics;University of Electronic Science and Technology of China;State University of New York at Stony Brook;Tsinghua University;;ShanghaiTech University;Institute for Interdisciplinary Infor... | Poster | main | humanoid robots;locomotion;reinforcement learning;curriculum;sim-to-real | https://openreview.net/forum?id=O0oK2bVist | 4 | Adapting Humanoid Locomotion over Challenging Terrain via Two-Phase Training
Humanoid robots are a key focus in robotics, with their capacity to navigate tough terrains being essential for many uses. While strides have been made, creating adaptable locomotion for complex environments is still tough. Recent progress in ... | [
-0.06850792467594147,
-0.017312338575720787,
0.00045702073839493096,
-0.004434665199369192,
-0.018758123740553856,
-0.02963857538998127,
0.024763688445091248,
0.012113076634705067,
-0.01372568216174841,
-0.004640874452888966,
-0.024467118084430695,
-0.00502780731767416,
0.022131619974970818,... | ||
corl_2024_OGjGtN6hoo | OGjGtN6hoo | corl | 2,024 | Adaptive Language-Guided Abstraction from Contrastive Explanations | Many approaches to robot learning begin by inferring a reward function from a set of human demonstrations.
To learn a good reward, it is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward.
In particularly complex, high-dimensional... | Andi Peng;Belinda Z. Li;Ilia Sucholutsky;Nishanth Kumar;Julie Shah;Jacob Andreas;Andreea Bobu | Massachusetts Institute of Technology;Princeton University;The AI Institute;Massachusetts Institute of Technology;Microsoft;The AI Institute;Massachusetts Institute of Technology | Poster | main | reward learning;language-guided abstraction;reward features | https://openreview.net/forum?id=OGjGtN6hoo | 4 | Adaptive Language-Guided Abstraction from Contrastive Explanations
Many approaches to robot learning begin by inferring a reward function from a set of human demonstrations.
To learn a good reward, it is necessary to determine which features of the environment are relevant before determining how these features should b... | [
-0.037849895656108856,
-0.026150496676564217,
0.014245187863707542,
-0.012363924644887447,
-0.018559927120804787,
-0.035079475492239,
0.011596444062888622,
-0.0019350805087015033,
-0.021283546462655067,
-0.009088093414902687,
-0.02425987273454666,
-0.021115075796842575,
-0.022481564432382584... | ||
corl_2024_Oce2215aJE | Oce2215aJE | corl | 2,024 | Body Transformer: Leveraging Robot Embodiment for Policy Learning | In recent years, the transformer architecture has become the de-facto standard for machine learning algorithms applied to natural language processing and computer vision. Despite notable evidence of successful deployment of this architecture in the context of robot learning, we claim that vanilla transformers do not fu... | Carmelo Sferrazza;Dun-Ming Huang;Fangchen Liu;Jongmin Lee;Pieter Abbeel | University of California, Berkeley;University of California, Berkeley;University of California, Berkeley;University of California, Berkeley;Covariant | Poster | main | Robot Learning;Graph Neural Networks;Imitation Learning;Reinforcement Learning | https://github.com/carlosferrazza/BodyTransformer | https://openreview.net/forum?id=Oce2215aJE | 9 | Body Transformer: Leveraging Robot Embodiment for Policy Learning
In recent years, the transformer architecture has become the de-facto standard for machine learning algorithms applied to natural language processing and computer vision. Despite notable evidence of successful deployment of this architecture in the conte... | [
-0.05683477222919464,
-0.03445770591497421,
-0.012424475513398647,
0.026354383677244186,
-0.03378862515091896,
-0.05215119943022728,
-0.014320206828415394,
0.017312489449977875,
0.042375173419713974,
0.006249408703297377,
-0.006319104693830013,
-0.027246493846178055,
0.00880957581102848,
0... | |
corl_2024_OznnnxPLiH | OznnnxPLiH | corl | 2,024 | JointMotion: Joint Self-Supervision for Joint Motion Prediction | We present JointMotion, a self-supervised pre-training method for joint motion prediction in self-driving vehicles. Our method jointly optimizes a scene-level objective connecting motion and environments, and an instance-level objective to refine learned representations. Scene-level representations are learned via non-... | Royden Wagner;Omer Sahin Tas;Marvin Klemp;Carlos Fernandez | Karlsruhe Institute of Technology;FZI Research Center for Information Technology;Karlsruhe Institute of Technology;Karlsruher Institut für Technologie | Poster | main | Self-supervised learning;representation learning;multimodal pre-training;motion prediction;data-efficient learning | https://github.com/kit-mrt/future-motion | https://openreview.net/forum?id=OznnnxPLiH | 2 | JointMotion: Joint Self-Supervision for Joint Motion Prediction
We present JointMotion, a self-supervised pre-training method for joint motion prediction in self-driving vehicles. Our method jointly optimizes a scene-level objective connecting motion and environments, and an instance-level objective to refine learned r... | [
-0.01890438236296177,
-0.036344725638628006,
-0.03671073541045189,
0.01697368361055851,
-0.007997304201126099,
0.01710178703069687,
0.02175925485789776,
0.021795855835080147,
-0.011776350438594818,
0.03338005021214485,
-0.022399771958589554,
-0.03499049320816994,
0.019618099555373192,
0.04... | |
corl_2024_PAtsxVz0ND | PAtsxVz0ND | corl | 2,024 | ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real | This paper tackles the challenging robotic task of generalizable paper cutting using scissors.
In this task, scissors attached to a robot arm are driven to accurately cut curves drawn on the paper, which is hung with the top edge fixed.
Due to the frequent paper-scissor contact and consequent fracture, the paper feat... | Jiangran Lyu;Yuxing Chen;Tao Du;Feng Zhu;Huiquan Liu;Yizhou Wang;He Wang | Peking University;Peking University;Shanghai Qi Zhi Institute;;University of Electronic Science and Technology of China;Peking University;Peking University | Poster | main | Deformable Object Manipulation;Imitation Learning;Sim-to-Real | https://openreview.net/forum?id=PAtsxVz0ND | 5 | ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real
This paper tackles the challenging robotic task of generalizable paper cutting using scissors.
In this task, scissors attached to a robot arm are driven to accurately cut curves drawn on the paper, which is hung ... | [
-0.06815088540315628,
0.017592310905456543,
0.00933710765093565,
-0.004709466360509396,
-0.00716420728713274,
-0.0029911475721746683,
-0.020801657810807228,
-0.0005463503766804934,
0.02309275045990944,
0.02872956171631813,
-0.03720296546816826,
-0.010391737334430218,
0.01247372105717659,
0... | ||
corl_2024_PbQOZntuXO | PbQOZntuXO | corl | 2,024 | One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion | Deep Reinforcement Learning techniques are achieving state-of-the-art results in robust legged locomotion.
While there exists a wide variety of legged platforms such as quadruped, humanoids, and hexapods, the field is still missing a single learning framework that can control all these different embodiments easily and ... | Nico Bohlinger;Grzegorz Czechmanowski;Maciej Piotr Krupka;Piotr Kicki;Krzysztof Walas;Jan Peters;Davide Tateo | Technische Universität Darmstadt;Technical University of Poznan;Technical University of Poznan;IDEAS NCBR Sp.;Technical University of Poznan;TU Darmstadt;Technische Universität Darmstadt | Poster | main | Locomotion;Reinforcement Learning;Multi-embodiment Learning | https://github.com/nico-bohlinger/one_policy_to_run_them_all | https://openreview.net/forum?id=PbQOZntuXO | 14 | One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion
Deep Reinforcement Learning techniques are achieving state-of-the-art results in robust legged locomotion.
While there exists a wide variety of legged platforms such as quadruped, humanoids, and hexapods, the field is still missi... | [
-0.05270666256546974,
-0.015954652801156044,
0.011543660424649715,
0.0023310217075049877,
-0.06318042427301407,
-0.054621219635009766,
-0.00842311792075634,
0.006996584124863148,
0.016649149358272552,
-0.013242361135780811,
-0.018300924450159073,
0.028906075283885002,
0.03230347856879234,
... | |
corl_2024_Q2lGXMZCv8 | Q2lGXMZCv8 | corl | 2,024 | LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning | In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics applications have been extensively trained on language and action ... | Dantong Niu;Yuvan Sharma;Giscard Biamby;Jerome Quenum;Yutong Bai;Baifeng Shi;Trevor Darrell;Roei Herzig | University of California, Berkeley;University of California, Berkeley;University of California, Berkeley;Johns Hopkins University;NVIDIA;University of California, Berkeley;University of California, Berkeley;Electrical Engineering & Computer Science Department | Poster | main | LMMs;Vision Action Instruction Tuning;Robot Learning | https://openreview.net/forum?id=Q2lGXMZCv8 | 22 | LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning
In recent years, instruction-tuned Large Multimodal Models (LMMs) have been successful at several tasks, including image captioning and visual question answering; yet leveraging these models remains an open question for robotics. Prior LMMs for robotics a... | [
-0.054400619119405746,
-0.02071704901754856,
0.002362349536269903,
-0.00033087815972976387,
-0.03351827338337898,
0.03320604935288429,
-0.014849054627120495,
0.02709011174738407,
0.006561317015439272,
-0.008861680515110493,
-0.026263633742928505,
-0.002686053514480591,
0.010698298923671246,
... | ||
corl_2024_QUzwHYJ9Hf | QUzwHYJ9Hf | corl | 2,024 | Towards Open-World Grasping with Large Vision-Language Models | The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics.
An open-world grasping system should be able to combine high-level contextual with low-level physical-geometric reasoning in order to be applicable in arbitrary scenarios.
Recent works exploit... | Georgios Tziafas;Hamidreza Kasaei | University of Groningen;University of Groningen | Poster | main | Foundation Models for Robotics;Open-World Grasping;Open-Ended Visual Grounding;Robot Planning | https://github.com/gtziafas/OWG | https://openreview.net/forum?id=QUzwHYJ9Hf | 14 | Towards Open-World Grasping with Large Vision-Language Models
The ability to grasp objects in-the-wild from open-ended language instructions constitutes a fundamental challenge in robotics.
An open-world grasping system should be able to combine high-level contextual with low-level physical-geometric reasoning in order... | [
-0.04420611262321472,
-0.003923245705664158,
-0.008276949636638165,
-0.011781434528529644,
-0.014654270373284817,
-0.021541591733694077,
-0.008567039854824543,
-0.006742274854332209,
-0.019426735118031502,
-0.006597229279577732,
-0.027830014005303383,
-0.006620623636990786,
0.029476981610059... | |
corl_2024_Qoy12gkH4C | Qoy12gkH4C | corl | 2,024 | Progressive Multi-Modal Fusion for Robust 3D Object Detection | Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird's Eye View (BEV) or Perspective View (PV), thus ... | Rohit Mohan;Daniele Cattaneo;Florian Drews;Abhinav Valada | Albert-Ludwigs-Universität Freiburg;Universität Freiburg;;University of Freiburg | Poster | main | 3D Object Detection;Multimodal Learning;Self-Supervised Learning | https://openreview.net/forum?id=Qoy12gkH4C | 3 | Progressive Multi-Modal Fusion for Robust 3D Object Detection
Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities e... | [
-0.04154419153928757,
-0.026325935497879982,
-0.01488027349114418,
-0.013254313729703426,
0.0021580506581813097,
0.00999325979501009,
0.00034911325201392174,
0.01300767995417118,
0.04285957291722298,
0.0241701677441597,
-0.05433264002203941,
-0.016652386635541916,
0.021776901558041573,
0.0... | ||
corl_2024_Qpjo8l8AFW | Qpjo8l8AFW | corl | 2,024 | Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation | Given the high cost of collecting robotic data in the real world, sample efficiency is a consistently compelling pursuit in robotics. In this paper, we introduce SGRv2, an imitation learning framework that enhances sample efficiency through improved visual and action representations. Central to the design of SGRv2 is t... | Tong Zhang;Yingdong Hu;Jiacheng You;Yang Gao | Tsinghua University;Tsinghua University;Tsinghua University;Tsinghua University | Poster | main | Robotic Manipulation;Sample Efficiency | https://github.com/TongZhangTHU/sgr | https://openreview.net/forum?id=Qpjo8l8AFW | 8 | Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation
Given the high cost of collecting robotic data in the real world, sample efficiency is a consistently compelling pursuit in robotics. In this paper, we introduce SGRv2, an imitation learning framework that enhances sample efficiency through improved... | [
-0.08292392641305923,
-0.02080502174794674,
0.012706981040537357,
0.00979168713092804,
-0.008509882725775242,
-0.0038245883770287037,
0.01557600125670433,
-0.012910588644444942,
0.004537215922027826,
0.003449765034019947,
-0.044090356677770615,
0.006043451372534037,
-0.020527373999357224,
... | |
corl_2024_QtCtY8zl2T | QtCtY8zl2T | corl | 2,024 | Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations | In this study, we consider the problem of predicting task success for open-vocabulary manipulation by a manipulator, based on instruction sentences and egocentric images before and after manipulation. Conventional approaches, including multimodal large language models (MLLMs), often fail to appropriately understand det... | Miyu Goko;Motonari Kambara;Daichi Saito;Seitaro Otsuki;Komei Sugiura | Keio University;Keio University;Keio University;Keio University;Keio University | Poster | main | Task Success Prediction;Open-Vocabulary Manipulation;Multi-Level Aligned Visual Representation | https://github.com/keio-smilab24/contrastive-lambda-repformer | https://openreview.net/forum?id=QtCtY8zl2T | 2 | Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations
In this study, we consider the problem of predicting task success for open-vocabulary manipulation by a manipulator, based on instruction sentences and egocentric images before and after manipulation. Conventional appr... | [
-0.009830053895711899,
-0.02678094618022442,
-0.025957198813557625,
0.020849963650107384,
0.00046564615331590176,
0.029563382267951965,
0.01230129599571228,
-0.01940383017063141,
0.0006778755341656506,
0.005450462456792593,
-0.05220728740096092,
-0.0031645633280277252,
-0.0019655530340969563... | |
corl_2024_Qz2N4lWBk3 | Qz2N4lWBk3 | corl | 2,024 | Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope | Legged robot locomotion on sand slopes is challenging due to the complex dynamics of granular media and how the lack of solid surfaces can hinder locomotion. A promising strategy, inspired by ghost crabs and other organisms in nature, is to strategically interact with rocks, debris, and other obstacles to facilitate mo... | Haodi Hu;Feifei Qian;Daniel Seita | University of Southern California;University of Southern California;University of Southern California | Poster | main | Granular media;Avalanche dynamics;Legged robots. | https://openreview.net/forum?id=Qz2N4lWBk3 | 1 | Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope
Legged robot locomotion on sand slopes is challenging due to the complex dynamics of granular media and how the lack of solid surfaces can hinder locomotion. A promising strategy, inspired by ghost crabs and other organ... | [
-0.08439701795578003,
-0.033313628286123276,
-0.004243699833750725,
0.03379996120929718,
-0.017180554568767548,
-0.007299631368368864,
-0.012438833713531494,
-0.0111855985596776,
-0.027122270315885544,
0.0026350687257945538,
-0.018358970060944557,
-0.010549627244472504,
0.0008236051071435213... | ||
corl_2024_RMkdcKK7jq | RMkdcKK7jq | corl | 2,024 | SLR: Learning Quadruped Locomotion without Privileged Information | Traditional reinforcement learning control for quadruped robots often relies on privileged information, demanding meticulous selection and precise estimation, thereby imposing constraints on the development process. This work proposes a Self-learning Latent Representation (SLR) method, which achieves high-performance c... | Shiyi Chen;Zeyu Wan;Shiyang Yan;Chun Zhang;Weiyi Zhang;Qiang Li;Debing Zhang;Fasih Ud Din Farrukh | Tsinghua University;Tsinghua University;Tsinghua University;Tsinghua University;Shenzhen Technology University;Tsinghua University ;Tsinghua University;Tsinghua University | Poster | main | Locomotion;Reinforcement Learning;Privileged Learning | https://openreview.net/forum?id=RMkdcKK7jq | 4 | SLR: Learning Quadruped Locomotion without Privileged Information
Traditional reinforcement learning control for quadruped robots often relies on privileged information, demanding meticulous selection and precise estimation, thereby imposing constraints on the development process. This work proposes a Self-learning Lat... | [
-0.09330987185239792,
0.0074974484741687775,
0.0027549739461392164,
0.007880019024014473,
-0.03245317190885544,
0.009592254646122456,
0.029635215178132057,
-0.0022907573729753494,
0.01554542500525713,
0.0009325155406259,
0.003944674972444773,
0.007656075060367584,
-0.017803523689508438,
0.... | ||
corl_2024_S2Jwb0i7HN | S2Jwb0i7HN | corl | 2,024 | DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics | A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, and generality, along with limited or no hardware safety guarantees. In this wor... | Tyler Ga Wei Lum;Martin Matak;Viktor Makoviychuk;Ankur Handa;Arthur Allshire;Tucker Hermans;Nathan D. Ratliff;Karl Van Wyk | Stanford University;University of Utah;NVIDIA;Imperial College London;University of Toronto;University of Utah;NVIDIA; | Poster | main | Dexterous Grasping;Geometric Fabrics;Reinforcement Learning;Teacher-Student Distillation;Sim-to-Real Transfer | https://openreview.net/forum?id=S2Jwb0i7HN | 13 | DextrAH-G: Pixels-to-Action Dexterous Arm-Hand Grasping with Geometric Fabrics
A pivotal challenge in robotics is achieving fast, safe, and robust dexterous grasping across a diverse range of objects, an important goal within industrial applications. However, existing methods often have very limited speed, dexterity, a... | [
-0.028807468712329865,
-0.05273425579071045,
0.002874074038118124,
-0.027949536219239235,
-0.04465062543749809,
-0.03858790174126625,
0.00852213054895401,
-0.012096849270164967,
-0.013231226243078709,
0.014518125914037228,
-0.03546121343970299,
0.005099932663142681,
0.02545199915766716,
-0... | ||
corl_2024_S70MgnIA0v | S70MgnIA0v | corl | 2,024 | Robotic Control via Embodied Chain-of-Thought Reasoning | A key limitation of learned robot control policies is their inability to generalize outside their training data.
Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of learned robot policies can substantially improve their r... | Michał Zawalski;William Chen;Karl Pertsch;Oier Mees;Chelsea Finn;Sergey Levine | University of California, Berkeley;University of California, Berkeley;Stanford University;Electrical Engineering & Computer Science Department, University of California, Berkeley;Google;Google | Poster | main | Vision-Language-Action Models;Embodied Chain-of-Thought Reasoning | https://github.com/MichalZawalski/embodied-CoT/ | https://openreview.net/forum?id=S70MgnIA0v | 58 | Robotic Control via Embodied Chain-of-Thought Reasoning
A key limitation of learned robot control policies is their inability to generalize outside their training data.
Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models as the backbone of ... | [
-0.0451742447912693,
-0.030405741184949875,
0.015101210214197636,
0.036542341113090515,
-0.015212113037705421,
-0.06162476912140846,
-0.006584830116480589,
0.01407536305487156,
0.008202156983315945,
0.01155233383178711,
-0.012356376275420189,
-0.013428432866930962,
0.02482365444302559,
0.0... | |
corl_2024_S8jQtafbT3 | S8jQtafbT3 | corl | 2,024 | Autonomous Interactive Correction MLLM for Robust Robotic Manipulation | The ability to reflect on and correct failures is crucial for robotic systems to interact stably with real-life objects. Observing the generalization and reasoning capabilities of Multimodal Large Language Models (MLLMs), previous approaches have aimed to utilize these models to enhance robotic systems accordingly. How... | Chuyan Xiong;Chengyu Shen;Xiaoqi Li;Kaichen Zhou;Jiaming Liu;Ruiping Wang;Hao Dong | Beijing Jiaotong University;Xi'an Jiaotong University;Department of Computer Science, University of Oxford;Peking University;Institute of Computing Technology, Chinese Academy of Sciences;Peking University;Peking University | Poster | main | large language model;robotics | https://openreview.net/forum?id=S8jQtafbT3 | 4 | Autonomous Interactive Correction MLLM for Robust Robotic Manipulation
The ability to reflect on and correct failures is crucial for robotic systems to interact stably with real-life objects. Observing the generalization and reasoning capabilities of Multimodal Large Language Models (MLLMs), previous approaches have ai... | [
-0.022018367424607277,
-0.02482137642800808,
-0.009220422245562077,
0.01508461032062769,
-0.002345444867387414,
-0.01578536257147789,
-0.010981522500514984,
-0.019436649978160858,
-0.006809281650930643,
0.005818086210638285,
-0.03418932482600212,
-0.026093794032931328,
0.006094698794186115,
... | ||
corl_2024_SFJz5iLvur | SFJz5iLvur | corl | 2,024 | Lessons from Learning to Spin “Pens” | In-hand manipulation of pen-like objects is a most basic and important skill in our daily lives, as many tools such as hammers and screwdrivers are similarly shaped. However, current learning-based methods struggle with this task due to a lack of high-quality demonstrations and the significant gap between simulation an... | Jun Wang;Ying Yuan;Haichuan Che;Haozhi Qi;Yi Ma;Jitendra Malik;Xiaolong Wang | ;IIIS, Tsinghua University, Tsinghua University;University of California, San Diego;University of California, Berkeley;University of California, Berkeley;University of California, Berkeley;University of California, San Diego | Poster | main | Dexterous In-Hand Manipulation;Reinforcement Learning | https://github.com/HaozhiQi/penspin | https://openreview.net/forum?id=SFJz5iLvur | 16 | Lessons from Learning to Spin “Pens”
In-hand manipulation of pen-like objects is a most basic and important skill in our daily lives, as many tools such as hammers and screwdrivers are similarly shaped. However, current learning-based methods struggle with this task due to a lack of high-quality demonstrations and the ... | [
-0.07770725339651108,
0.014198659919202328,
0.0025049629621207714,
0.007665790617465973,
-0.024868547916412354,
-0.026280056685209274,
-0.0038166444282978773,
0.00811617262661457,
0.005971051752567291,
0.01246213261038065,
-0.010707033798098564,
0.024385664612054825,
-0.008134745992720127,
... | |
corl_2024_SW8ntpJl0E | SW8ntpJl0E | corl | 2,024 | JA-TN: Pick-and-Place Towel Shaping from Crumpled States based on TransporterNet with Joint-Probability Action Inference | Towel manipulation is a crucial step towards more general cloth manipulation. However, folding a towel from an arbitrarily crumpled state and recovering from a failed folding step remain critical challenges in robotics. We propose joint-probability action inference JA-TN, as a way to improve TransporterNet's operationa... | Halid Abdulrahim Kadi;Kasim Terzić | University of St. Andrews;University of St. Andrews | Poster | main | Cloth Manipulation;Imitation Learning;Sim2Real Transfer | https://openreview.net/forum?id=SW8ntpJl0E | 3 | JA-TN: Pick-and-Place Towel Shaping from Crumpled States based on TransporterNet with Joint-Probability Action Inference
Towel manipulation is a crucial step towards more general cloth manipulation. However, folding a towel from an arbitrarily crumpled state and recovering from a failed folding step remain critical cha... | [
-0.04677219316363335,
-0.01901165209710598,
-0.003464262932538986,
0.040642399340867996,
-0.016754772514104843,
0.033472396433353424,
0.02674819529056549,
0.012055262923240662,
-0.0545365996658802,
0.03859913349151611,
-0.01664332114160061,
0.0012921328889206052,
-0.00028776947874575853,
-... | ||
corl_2024_SfaB20rjVo | SfaB20rjVo | corl | 2,024 | An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild | Aerial manipulation combines the versatility and speed of flying platforms with the functional capabilities of mobile manipulation, which presents significant challenges due to the need for precise localization and control. Traditionally, researchers have relied on off-board perception systems, which are limited to exp... | Erik Bauer;Marc Blöchlinger;Pascal Strauch;Arman Raayatsanati;Cavelti Curdin;Robert K. Katzschmann | BMW Group;ETHZ - ETH Zurich;ETHZ - ETH Zurich;ETH Zurich;ETHZ - ETH Zurich;Swiss Federal Institute of Technology | Poster | main | Aerial Manipulation;Learning-Based Grasping;Autonomous Flight;Robotic Systems;Soft Grasping | https://github.com/srl-ethz/osprey | https://openreview.net/forum?id=SfaB20rjVo | 2 | An Open-Source Soft Robotic Platform for Autonomous Aerial Manipulation in the Wild
Aerial manipulation combines the versatility and speed of flying platforms with the functional capabilities of mobile manipulation, which presents significant challenges due to the need for precise localization and control. Traditionall... | [
-0.050479594618082047,
0.02139654941856861,
-0.04109320044517517,
-0.021581320092082024,
0.009645076468586922,
-0.021747615188360214,
-0.004570786841213703,
0.02749401144683361,
0.051070865243673325,
0.026348426938056946,
-0.030136244371533394,
0.0075525385327637196,
0.036880407482385635,
... | |
corl_2024_Si2krRESZb | Si2krRESZb | corl | 2,024 | TieBot: Learning to Knot a Tie from Visual Demonstration through a Real-to-Sim-to-Real Approach | The tie-knotting task is highly challenging due to the tie's high deformation and long-horizon manipulation actions. This work presents TieBot, a Real-to-Sim-to-Real learning from visual demonstration system for the robots to learn to knot a tie. We introduce the Hierarchical Feature Matching approach to estimate a seq... | Weikun Peng;Jun Lv;Yuwei Zeng;Haonan Chen;Siheng Zhao;Jichen Sun;Cewu Lu;Lin Shao | national university of singapore, National University of Singapore;Shanghai Jiaotong University;National University of Singapore;Nanjing University;Nanjing University;Shanghai Jiaotong University;Shanghai Jiaotong University;National University of Singapore | Poster | main | cloth manipulation;learning from demonstration;robot learning | https://openreview.net/forum?id=Si2krRESZb | 2 | TieBot: Learning to Knot a Tie from Visual Demonstration through a Real-to-Sim-to-Real Approach
The tie-knotting task is highly challenging due to the tie's high deformation and long-horizon manipulation actions. This work presents TieBot, a Real-to-Sim-to-Real learning from visual demonstration system for the robots t... | [
-0.022870825603604317,
-0.03173910453915596,
-0.006427168846130371,
0.018474025651812553,
-0.022646784782409668,
-0.01934218406677246,
-0.0023512609768658876,
0.057205069810152054,
-0.03282196819782257,
0.06810838729143143,
-0.02923731692135334,
-0.02089179866015911,
0.017801903188228607,
... | ||
corl_2024_TzqKmIhcwq | TzqKmIhcwq | corl | 2,024 | Structured Bayesian Meta-Learning for Data-Efficient Visual-Tactile Model Estimation | Estimating visual-tactile models of deformable objects is challenging because vision suffers from occlusion, while touch data is sparse and noisy. We propose a novel data-efficient method for dense heterogeneous model estimation by leveraging experience from diverse training objects. The method is based on Bayesian M... | Shaoxiong Yao;Yifan Zhu;Kris Hauser | University of Illinois, Urbana Champaign;University of Illinois, Urbana-Champaign;Yale University | Poster | main | Multimodal perception;tactile sensing;few-shot learning | https://openreview.net/forum?id=TzqKmIhcwq | 1 | Structured Bayesian Meta-Learning for Data-Efficient Visual-Tactile Model Estimation
Estimating visual-tactile models of deformable objects is challenging because vision suffers from occlusion, while touch data is sparse and noisy. We propose a novel data-efficient method for dense heterogeneous model estimation by le... | [
-0.05256575718522072,
-0.032816264778375626,
-0.035578951239585876,
0.009132740087807178,
0.014065446332097054,
-0.0399843230843544,
0.04080566018819809,
0.013430774211883545,
0.02912023477256298,
0.02370685711503029,
-0.04140299931168556,
0.005334042944014072,
0.020402830094099045,
0.0120... | ||
corl_2024_U5RPcnFhkq | U5RPcnFhkq | corl | 2,024 | FetchBench: A Simulation Benchmark for Robot Fetching | Fetching, which includes approaching, grasping, and retrieving, is a critical challenge for robot manipulation tasks. Existing methods primarily focus on table-top scenarios, which do not adequately capture the complexities of environments where both grasping and planning are essential. To address this gap, we propose ... | Beining Han;Meenal Parakh;Derek Geng;Jack A Defay;Gan Luyang;Jia Deng | Department of Computer Science, Princeton University;Princeton University;Princeton University;Princeton University;Princeton University;Princeton University | Poster | main | Grasping; Benchmark; Imitation Learning | https://github.com/princeton-vl/FetchBench-CORL2024 | https://openreview.net/forum?id=U5RPcnFhkq | 3 | FetchBench: A Simulation Benchmark for Robot Fetching
Fetching, which includes approaching, grasping, and retrieving, is a critical challenge for robot manipulation tasks. Existing methods primarily focus on table-top scenarios, which do not adequately capture the complexities of environments where both grasping and pl... | [
-0.029092969372868538,
-0.046142466366291046,
-0.030036132782697678,
0.0030584800988435745,
-0.014383245259523392,
-0.01129982527345419,
0.019008373841643333,
0.024340875446796417,
-0.01748480275273323,
0.021184906363487244,
-0.0406285859644413,
-0.010329455137252808,
-0.018455171957612038,
... | |
corl_2024_UHxPZgK33I | UHxPZgK33I | corl | 2,024 | RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation | We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ACSG accounts for both low-level information (geometry and semantics) and high-level informat... | Hanxiao Jiang;Binghao Huang;Ruihai Wu;Zhuoran Li;Shubham Garg;Hooshang Nayyeri;Shenlong Wang;Yunzhu Li | University of Illinois, Urbana Champaign;University of Illinois Urbana-Champaign;Peking University;National University of Singapore;Amazon;Amazon;University of Illinois, Urbana Champaign;University of Illinois Urbana-Champaign | Poster | main | Action-Conditioned Scene Graph;Foundation Models for Robotics;Scene Exploration;Robotic Manipulation | https://github.com/Jianghanxiao/RoboEXP | https://openreview.net/forum?id=UHxPZgK33I | 21 | RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ... | [
-0.0679287537932396,
-0.04511069878935814,
-0.016147153452038765,
-0.008139253593981266,
-0.035334210842847824,
-0.045861292630434036,
-0.020115917548537254,
0.0013065026141703129,
0.024675777181982994,
-0.003335451940074563,
-0.001402672496624291,
0.0011786670656874776,
-0.01531211659312248... | |
corl_2024_URj5TQTAXM | URj5TQTAXM | corl | 2,024 | OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation | We study the problem of teaching humanoid robots manipulation skills by imitating from single video demonstrations. We introduce OKAMI, a method that generates a manipulation plan from a single RGB-D video and derives a policy for execution. At the heart of our approach is object-aware retargeting, which enables the hu... | Jinhan Li;Yifeng Zhu;Yuqi Xie;Zhenyu Jiang;Mingyo Seo;Georgios Pavlakos;Yuke Zhu | Tsinghua University;The University of Texas at Austin;University of Texas at Austin;University of Texas, Austin;University of Texas at Austin;University of Texas at Austin;Computer Science Department, University of Texas, Austin | Poster | main | Humanoid Manipulation;Imitation From Videos;Motion Retargeting | https://openreview.net/forum?id=URj5TQTAXM | 33 | OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation
We study the problem of teaching humanoid robots manipulation skills by imitating from single video demonstrations. We introduce OKAMI, a method that generates a manipulation plan from a single RGB-D video and derives a policy for execut... | [
-0.0374845527112484,
-0.02889125421643257,
-0.015695730224251747,
-0.0034262065310031176,
-0.033243462443351746,
0.010176759213209152,
-0.015408669598400593,
0.007088543381541967,
-0.00668110279366374,
0.004474903456866741,
-0.039373595267534256,
-0.017871834337711334,
0.04526296630501747,
... | ||
corl_2024_UUZ4Yw3lt0 | UUZ4Yw3lt0 | corl | 2,024 | Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions | Humanoid robots, with their human-like embodiment, have the potential to integrate seamlessly into human environments. Critical to their coexistence and cooperation with humans is the ability to understand natural language communications and exhibit human-like behaviors. This work focuses on generating diverse whole-bo... | Zhenyu Jiang;Yuqi Xie;Jinhan Li;Ye Yuan;Yifeng Zhu;Yuke Zhu | University of Texas, Austin;University of Texas at Austin;Tsinghua University;NVIDIA Research;The University of Texas at Austin;Computer Science Department, University of Texas, Austin | Poster | main | Humanoid Robot;Whole-Body Motion Generation | https://openreview.net/forum?id=UUZ4Yw3lt0 | 9 | Harmon: Whole-Body Motion Generation of Humanoid Robots from Language Descriptions
Humanoid robots, with their human-like embodiment, have the potential to integrate seamlessly into human environments. Critical to their coexistence and cooperation with humans is the ability to understand natural language communications... | [
-0.012261835858225822,
-0.022500144317746162,
0.0066878520883619785,
0.012605278752744198,
-0.026398682966828346,
-0.010386823676526546,
-0.019121408462524414,
-0.03432571515440941,
0.027215519919991493,
-0.023614011704921722,
-0.014192541129887104,
-0.008377219550311565,
0.00294943084008991... | ||
corl_2024_Uaaj4MaVIQ | Uaaj4MaVIQ | corl | 2,024 | D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement | Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation is expected to be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack all three properties simultaneously. In this work, we introduce D$^3$Fields---**dynam... | Yixuan Wang;Mingtong Zhang;Zhuoran Li;Tarik Kelestemur;Katherine Rose Driggs-Campbell;Jiajun Wu;Li Fei-Fei;Yunzhu Li | University of Illinois, Urbana Champaign;University of Illinois, Urbana Champaign;National University of Singapore;Boston Dynamics AI Institute;;Stanford University;Stanford University;University of Illinois Urbana-Champaign | Poster | main | Implicit 3D Representation;Visual Foundational Model;Zero-Shot Generalization;Robotic Manipulation | https://github.com/WangYixuan12/d3fields | https://openreview.net/forum?id=Uaaj4MaVIQ | 10 | D$^3$Fields: Dynamic 3D Descriptor Fields for Zero-Shot Generalizable Rearrangement
Scene representation is a crucial design choice in robotic manipulation systems. An ideal representation is expected to be 3D, dynamic, and semantic to meet the demands of diverse manipulation tasks. However, previous works often lack a... | [
-0.04922755807638168,
-0.009349980391561985,
-0.04868500679731369,
0.0025070428382605314,
-0.0010811480460688472,
-0.009567001834511757,
0.030473342165350914,
-0.006347859278321266,
0.02566271275281906,
0.020725488662719727,
-0.021195700392127037,
0.01469411887228489,
0.01799464412033558,
... | |
corl_2024_V5x0m6XDSV | V5x0m6XDSV | corl | 2,024 | Differentiable Discrete Elastic Rods for Real-Time Modeling of Deformable Linear Objects | This paper addresses the task of modeling Deformable Linear Objects (DLOs), such as ropes and cables, during dynamic motion over long time horizons. This task presents significant challenges due to the complex dynamics of DLOs. To address these challenges, this paper proposes differentiable Discrete Elastic Rods For de... | Yizhou Chen;Yiting Zhang;Zachary Brei;Tiancheng Zhang;Yuzhen Chen;Julie Wu;Ram Vasudevan | University of Michigan - Ann Arbor;University of Michigan - Ann Arbor;;;University of Michigan - Ann Arbor;; | Poster | main | Deformable Linear Objects Modeling;Physics-Informed Learning;Differentiable Simulation | https://github.com/roahmlab/DEFORM | https://openreview.net/forum?id=V5x0m6XDSV | 4 | Differentiable Discrete Elastic Rods for Real-Time Modeling of Deformable Linear Objects
This paper addresses the task of modeling Deformable Linear Objects (DLOs), such as ropes and cables, during dynamic motion over long time horizons. This task presents significant challenges due to the complex dynamics of DLOs. To ... | [
0.01565210521221161,
-0.0029738096054643393,
-0.016647901386022568,
0.002217909786850214,
-0.02889619581401348,
-0.0033630754332989454,
0.002532490761950612,
-0.010012278333306313,
0.007934684865176678,
0.009451011195778847,
-0.024098267778754234,
-0.018992548808455467,
0.03765920177102089,
... | |
corl_2024_VFs1vbQnYN | VFs1vbQnYN | corl | 2,024 | Sim-to-Real Transfer via 3D Feature Fields for Vision-and-Language Navigation | Vision-and-language navigation (VLN) enables the agent to navigate to a remote location in 3D environments following the natural language instruction. In this field, the agent is usually trained and evaluated in the navigation simulators, lacking effective approaches for sim-to-real transfer. The VLN agents with only a... | Zihan Wang;Xiangyang Li;Jiahao Yang;Yeqi Liu;Shuqiang Jiang | Chinese Academy of Sciences;Institute of Computing Technology, Chinese Academy of Sciences;;;Institute of Computing Technology, Chinese Academy of Sciences | Poster | main | Vision-and-Language Navigation;3D Feature Fields;Semantic Traversable Map | https://github.com/MrZihan/Sim2Real-VLN-3DFF | https://openreview.net/forum?id=VFs1vbQnYN | 11 | Sim-to-Real Transfer via 3D Feature Fields for Vision-and-Language Navigation
Vision-and-language navigation (VLN) enables the agent to navigate to a remote location in 3D environments following the natural language instruction. In this field, the agent is usually trained and evaluated in the navigation simulators, lac... | [
-0.07823990285396576,
-0.01621154136955738,
0.016723869368433952,
-0.016431109979748726,
-0.008092047646641731,
0.020419955253601074,
-0.009162448346614838,
-0.004274741746485233,
0.021426314488053322,
0.0067471847869455814,
-0.005695081781595945,
0.020566334947943687,
0.018901266157627106,
... | |
corl_2024_VMqg1CeUQP | VMqg1CeUQP | corl | 2,024 | DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands | Achieving human-like dexterous manipulation remains a crucial area of research in robotics. Current research focuses on improving the success rate of pick-and-place tasks. Compared with pick-and-place, throwing-catching behavior has the potential to increase the speed of transporting objects to their destination. Howev... | Fengbo Lan;Shengjie Wang;Yunzhe Zhang;Haotian Xu;Oluwatosin OluwaPelumi Oseni;Ziye Zhang;Yang Gao;Tao Zhang | ;Tsinghua University;;Tsinghua University;;Tsinghua University;Tsinghua University; | Poster | main | Reinforcement Learning;Dexterous Manipulation;System Stability | https://openreview.net/forum?id=VMqg1CeUQP | 4 | DexCatch: Learning to Catch Arbitrary Objects with Dexterous Hands
Achieving human-like dexterous manipulation remains a crucial area of research in robotics. Current research focuses on improving the success rate of pick-and-place tasks. Compared with pick-and-place, throwing-catching behavior has the potential to inc... | [
-0.02651555836200714,
-0.003764612367376685,
-0.03265462815761566,
0.0333823598921299,
0.006087755784392357,
0.008345589973032475,
0.0034543932415544987,
0.010346852242946625,
-0.016765819862484932,
0.02642226032912731,
-0.026496898382902145,
-0.01722298376262188,
0.00026838024496100843,
0... | ||
corl_2024_VUhlMfEekm | VUhlMfEekm | corl | 2,024 | Implicit Grasp Diffusion: Bridging the Gap between Dense Prediction and Sampling-based Grasping | There are two dominant approaches in modern robot grasp planning: dense prediction and sampling-based methods. Dense prediction calculates viable grasps across the robot’s view but is limited to predicting one grasp per voxel. Sampling-based methods, on the other hand, encode multi-modal grasp distributions, allowing f... | Pinhao Song;Pengteng Li;Renaud Detry | KU Leuven;; | Poster | main | Grasping;Implicit Neural Representations;Diffusion Models | https://gitlab.kuleuven.be/detry-lab/public/implicit-grasp-diffusion.git | https://openreview.net/forum?id=VUhlMfEekm | 2 | Implicit Grasp Diffusion: Bridging the Gap between Dense Prediction and Sampling-based Grasping
There are two dominant approaches in modern robot grasp planning: dense prediction and sampling-based methods. Dense prediction calculates viable grasps across the robot’s view but is limited to predicting one grasp per voxe... | [
-0.08233330398797989,
-0.04915871098637581,
0.015701383352279663,
-0.018142353743314743,
-0.03560613840818405,
-0.05749005824327469,
0.0057160197757184505,
0.005650047678500414,
0.013279261067509651,
0.02171427756547928,
-0.05967656522989273,
0.008373756892979145,
-0.01033878605812788,
0.0... | |
corl_2024_VdyIhsh1jU | VdyIhsh1jU | corl | 2,024 | Legolas: Deep Leg-Inertial Odometry | Estimating odometry, where an accumulating position and rotation is tracked, has critical applications in many areas of robotics as a form of state estimation such as in SLAM, navigation, and controls. During deployment of a legged robot, a vision system's tracking can easily get lost. Instead, using only the onboard l... | Justin Wasserman;Ananye Agarwal;Rishabh Jangir;Girish Chowdhary;Deepak Pathak;Abhinav Gupta | University of Illinois, Urbana Champaign;Carnegie Mellon University;University of California, San Diego;University of Illinois, Urbana Champaign;Carnegie Mellon University;Carnegie Mellon University | Poster | main | State and Odometry Estimation;Quadruped robots;Sim-to-Real | https://openreview.net/forum?id=VdyIhsh1jU | 1 | Legolas: Deep Leg-Inertial Odometry
Estimating odometry, where an accumulating position and rotation is tracked, has critical applications in many areas of robotics as a form of state estimation such as in SLAM, navigation, and controls. During deployment of a legged robot, a vision system's tracking can easily get los... | [
-0.06463944166898727,
0.02090521529316902,
-0.04071111977100372,
0.01623314619064331,
-0.03129369765520096,
-0.0071363551542162895,
-0.0028238529339432716,
-0.019091352820396423,
0.006966878194361925,
-0.004257536493241787,
-0.000862843997310847,
0.006261487491428852,
0.04708711802959442,
... | ||
corl_2024_VoC3wF6fbh | VoC3wF6fbh | corl | 2,024 | Learning to Open and Traverse Doors with a Legged Manipulator | Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control in manipulating the door panel and navigating through the con... | Mike Zhang;Yuntao Ma;Takahiro Miki;Marco Hutter | ETHZ - ETH Zurich;ETHZ - ETH Zurich;;ETHZ - ETH Zurich | Poster | main | Mobile Manipulation;Legged Manipulator;Reinforcement Learning;Door Opening | https://openreview.net/forum?id=VoC3wF6fbh | 9 | Learning to Open and Traverse Doors with a Legged Manipulator
Using doors is a longstanding challenge in robotics and is of significant practical interest in giving robots greater access to human-centric spaces. The task is challenging due to the need for online adaptation to varying door properties and precise control... | [
-0.0367085300385952,
-0.016565704718232155,
-0.011869537644088268,
0.01954026333987713,
-0.028215259313583374,
0.027316195890307426,
-0.0247720405459404,
-0.005121786613017321,
-0.013285082764923573,
0.02192182093858719,
-0.020869726315140724,
0.02442771941423416,
0.006762097589671612,
0.0... | ||
corl_2024_WLOTZHmmO6 | WLOTZHmmO6 | corl | 2,024 | Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction | Accurate perception of the dynamic environment is a fundamental task for autonomous driving and robot systems. This paper introduces Let Occ Flow, the first self-supervised work for joint 3D occupancy and occupancy flow prediction using only camera inputs, eliminating the need for 3D annotations. Utilizing TPV for unif... | Yili Liu;Linzhan Mou;Xuan Yu;Chenrui Han;Sitong Mao;Rong Xiong;Yue Wang | Zhejiang University;;Zhejiang University;Zhejiang University;The Hong Kong Polytechnic University;Zhejiang University;Zhejiang University | Poster | main | 3D occupancy prediction;occupancy flow;Neural Radiance Field | https://github.com/eliliu2233/occ-flow | https://openreview.net/forum?id=WLOTZHmmO6 | 10 | Let Occ Flow: Self-Supervised 3D Occupancy Flow Prediction
Accurate perception of the dynamic environment is a fundamental task for autonomous driving and robot systems. This paper introduces Let Occ Flow, the first self-supervised work for joint 3D occupancy and occupancy flow prediction using only camera inputs, elim... | [
-0.09578081965446472,
0.0023862444795668125,
-0.05311347916722298,
0.016156576573848724,
-0.01030820794403553,
-0.009098993614315987,
-0.007958745583891869,
-0.005742619279772043,
0.025434883311390877,
0.029076319187879562,
0.010565683245658875,
-0.046566251665353775,
0.03133842349052429,
... | |
corl_2024_WjDR48cL3O | WjDR48cL3O | corl | 2,024 | Continuous Control with Coarse-to-fine Reinforcement Learning | Despite recent advances in improving the sample-efficiency of reinforcement learning (RL) algorithms, designing an RL algorithm that can be practically deployed in real-world environments remains a challenge. In this paper, we present Coarse-to-fine Reinforcement Learning (CRL), a framework that trains RL agents to zoo... | Younggyo Seo;Jafar Uruç;Stephen James | Dyson;London Dyson Robot Learning Lab;Dyson | Poster | main | Reinforcement Learning;Sample-Efficient;Action Discretization | https://github.com/younggyoseo/CQN | https://openreview.net/forum?id=WjDR48cL3O | 8 | Continuous Control with Coarse-to-fine Reinforcement Learning
Despite recent advances in improving the sample-efficiency of reinforcement learning (RL) algorithms, designing an RL algorithm that can be practically deployed in real-world environments remains a challenge. In this paper, we present Coarse-to-fine Reinforc... | [
-0.056415021419525146,
0.02213289402425289,
-0.0049275970086455345,
-0.035548437386751175,
-0.02185760997235775,
-0.003700285917147994,
-0.004083390347659588,
0.012369461357593536,
-0.011617016978561878,
0.024628808721899986,
-0.013681652024388313,
-0.002945546992123127,
-0.03031802736222744... | |
corl_2024_WmWbswjTsi | WmWbswjTsi | corl | 2,024 | Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision | We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to update the predicted states. Our key insight is that coupling a 3D... | Alberta Longhini;Marcel Büsching;Bardienus Pieter Duisterhof;Jens Lundell;Jeffrey Ichnowski;Mårten Björkman;Danica Kragic | KTH Royal Institute of Technology;KTH Royal Institute of Technology;Naver Labs Europe;KTH Royal Institute of Technology;Carnegie Mellon University;KTH;KTH Royal Institute of Technology, Stockholm, Sweden | Poster | main | 3D State Representations;Gaussian Splatting;Deformable Objects;Vision-based Tracking | https://openreview.net/forum?id=WmWbswjTsi | 3 | Cloth-Splatting: 3D Cloth State Estimation from RGB Supervision
We introduce Cloth-Splatting, a method for estimating 3D states of cloth from RGB images through a prediction-update framework. Cloth-Splatting leverages an action-conditioned dynamics model for predicting future states and uses 3D Gaussian Splatting to up... | [
-0.07458338141441345,
-0.03764507919549942,
-0.039207421243190765,
0.014870177954435349,
-0.013317132368683815,
-0.010378435254096985,
0.014405193738639355,
0.007342092227190733,
0.0022760950960218906,
0.04270410165190697,
-0.0493626669049263,
0.013345031067728996,
0.01583734340965748,
0.0... | ||
corl_2024_WnSl42M9Z4 | WnSl42M9Z4 | corl | 2,024 | HumanPlus: Humanoid Shadowing and Imitation from Humans | One of the key arguments for building robots that have similar form factors to human beings is that we can leverage the massive human data for training. Yet, doing so has remained challenging in practice due to the complexities in humanoid perception and control, lingering physical gaps between humanoids and humans in m... | Zipeng Fu;Qingqing Zhao;Qi Wu;Gordon Wetzstein;Chelsea Finn | Stanford University;Stanford University;;Stanford University;Google | Poster | main | Humanoids;Learning from Human Data;Whole-Body Control | https://github.com/MarkFzp/humanplus | https://openreview.net/forum?id=WnSl42M9Z4 | 109 | HumanPlus: Humanoid Shadowing and Imitation from Humans
One of the key arguments for building robots that have similar form factors to human beings is that we can leverage the massive human data for training. Yet, doing so has remained challenging in practice due to the complexities in humanoid perception and control, l... | [
-0.05477925017476082,
-0.026008794084191322,
-0.02911093458533287,
0.010715623386204243,
-0.01870741695165634,
-0.05008820816874504,
-0.014214988797903061,
0.013912340626120567,
-0.007802638225257397,
0.0010645872680470347,
-0.04948291555047035,
-0.01528371311724186,
-0.005844885483384132,
... | |
corl_2024_X3OfR3axX4 | X3OfR3axX4 | corl | 2,024 | Multi-Transmotion: Pre-trained Model for Human Motion Prediction | The ability of intelligent systems to predict human behaviors is essential, particularly in fields such as autonomous vehicle navigation and social robotics. However, the intricacies of human motion have precluded the development of a standardized dataset and model for human motion prediction, thereby hindering the est... | Yang Gao;Po-Chien Luan;Alexandre Alahi | EPFL - EPF Lausanne;EPFL - EPF Lausanne;EPFL | Poster | main | Human motion prediction;Pre-training;Transformer | https://github.com/vita-epfl/multi-transmotion | https://openreview.net/forum?id=X3OfR3axX4 | 8 | Multi-Transmotion: Pre-trained Model for Human Motion Prediction
The ability of intelligent systems to predict human behaviors is essential, particularly in fields such as autonomous vehicle navigation and social robotics. However, the intricacies of human motion have precluded the development of a standardized dataset... | [
-0.015171503648161888,
-0.05512189492583275,
-0.05727335438132286,
0.0222935788333416,
-0.02032759040594101,
0.017026210203766823,
-0.009625929407775402,
0.013706285506486893,
0.0012519272277131677,
0.0010311012156307697,
-0.04833366721868515,
-0.03846662491559982,
0.0051375385373830795,
0... | |
corl_2024_XopATjibyz | XopATjibyz | corl | 2,024 | Learning Quadruped Locomotion Using Differentiable Simulation | This work explores the potential of using differentiable simulation for learning robot control. Differentiable simulation promises fast convergence and stable training by computing low-variance first-order gradients using the robot model. Still, so far, its usage for legged robots is limited to simulation. The main cha... | Yunlong Song;Sang bae Kim;Davide Scaramuzza | ;Massachusetts Institute of Technology; | Poster | main | Differentiable Simulation;Legged Locomotion;Reinforcement Learning | https://openreview.net/forum?id=XopATjibyz | 15 | Learning Quadruped Locomotion Using Differentiable Simulation
This work explores the potential of using differentiable simulation for learning robot control. Differentiable simulation promises fast convergence and stable training by computing low-variance first-order gradients using the robot model. Still, so far, its ... | [
-0.04389723762869835,
-0.017388224601745605,
-0.013433435000479221,
0.009148316457867622,
-0.03336336836218834,
0.011387222446501255,
0.02088422141969204,
0.013066401705145836,
0.011873542331159115,
0.007253504358232021,
-0.0327761135995388,
-0.027784455567598343,
0.01881047897040844,
-0.0... | ||
corl_2024_XrxLGzF0lJ | XrxLGzF0lJ | corl | 2,024 | So You Think You Can Scale Up Autonomous Robot Data Collection? | A long-standing goal in robot learning is to develop methods for robots to acquire new skills autonomously. While reinforcement learning (RL) comes with the promise of enabling autonomous data collection, it remains challenging to scale in the real-world partly due to the significant effort required for environment des... | Suvir Mirchandani;Suneel Belkhale;Joey Hejna;Evelyn Choi;Md Sazzad Islam;Dorsa Sadigh | Stanford University;Stanford University;Stanford University;Stanford University;Stanford University;Google | Poster | main | autonomous data collection;imitation learning | https://openreview.net/forum?id=XrxLGzF0lJ | 4 | So You Think You Can Scale Up Autonomous Robot Data Collection?
A long-standing goal in robot learning is to develop methods for robots to acquire new skills autonomously. While reinforcement learning (RL) comes with the promise of enabling autonomous data collection, it remains challenging to scale in the real-world p... | [
-0.08239630609750748,
0.003097291337326169,
-0.004931519273668528,
0.01765502616763115,
-0.03538434952497482,
0.00021128449589014053,
-0.00802881084382534,
0.03464137017726898,
-0.017478568479418755,
0.022177906706929207,
-0.027880266308784485,
-0.003078717039898038,
-0.015026738867163658,
... | ||
corl_2024_YOFrRTDC6d | YOFrRTDC6d | corl | 2,024 | SkillMimicGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment | Imitation learning from human demonstrations is an effective paradigm for robot manipulation, but acquiring large datasets is costly and resource-intensive, especially for long-horizon tasks. To address this issue, we propose SkillGen, an automated system for generating demonstration datasets from a few human demos. Sk... | Caelan Reed Garrett;Ajay Mandlekar;Bowen Wen;Dieter Fox | NVIDIA;NVIDIA;NVIDIA;Department of Computer Science | Poster | main | Imitation Learning;Manipulation;Planning | https://openreview.net/forum?id=YOFrRTDC6d | 10 | SkillMimicGen: Automated Demonstration Generation for Efficient Skill Learning and Deployment
Imitation learning from human demonstrations is an effective paradigm for robot manipulation, but acquiring large datasets is costly and resource-intensive, especially for long-horizon tasks. To address this issue, we propose ... | [
-0.03838079795241356,
-0.024254022166132927,
-0.027226150035858154,
-0.011035396717488766,
-0.02880394458770752,
-0.01468634232878685,
-0.030528511852025986,
0.007866045460104942,
0.012530633248388767,
-0.0020135240629315376,
-0.01787862740457058,
0.0023357339669018984,
-0.008709982968866825... | ||
corl_2024_Yce2jeILGt | Yce2jeILGt | corl | 2,024 | Open-TeleVision: Teleoperation with Immersive Active Visual Feedback | Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations. The intuitiveness and ease of use of the teleoperation system are crucial for ensuring high-quality, diverse, and scalable data. To achieve this, we propose an immersive teleoperation system $\textbf... | Xuxin Cheng;Jialong Li;Shiqi Yang;Ge Yang;Xiaolong Wang | University of California, San Diego;University of California, San Diego;University of California, San Diego;Massachusetts Institute of Technology;University of California, San Diego | Poster | main | Teleoperation;VR/AR;Imitation Learning | https://github.com/OpenTeleVision/TeleVision | https://openreview.net/forum?id=Yce2jeILGt | 99 | Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
Teleoperation serves as a powerful method for collecting on-robot data essential for robot learning from demonstrations. The intuitiveness and ease of use of the teleoperation system are crucial for ensuring high-quality, diverse, and scalable data. T... | [
-0.07209336757659912,
-0.013924136757850647,
-0.0222447719424963,
0.03963819146156311,
-0.0019238530658185482,
0.0022141351364552975,
0.009336034767329693,
0.001700559281744063,
0.02813032828271389,
0.017468633130192757,
-0.021812286227941513,
-0.03574582561850548,
0.028788458555936813,
-0... | |
corl_2024_Yw5QGNBkEN | Yw5QGNBkEN | corl | 2,024 | Scaling Manipulation Learning with Visual Kinematic Chain Prediction | Learning general-purpose models from diverse datasets has achieved great success in machine learning. In robotics, however, existing methods in multi-task learning are typically constrained to a single robot and workspace, while recent work such as RT-X requires a non-trivial action normalization procedure to manually ... | Xinyu Zhang;Yuhan Liu;Haonan Chang;Abdeslam Boularias | Rutgers University;Rutgers University;Rutgers, New Brunswick;, Rutgers University | Poster | main | Multi-Task Robot Learning;Manipulation | https://openreview.net/forum?id=Yw5QGNBkEN | 1 | Scaling Manipulation Learning with Visual Kinematic Chain Prediction
Learning general-purpose models from diverse datasets has achieved great success in machine learning. In robotics, however, existing methods in multi-task learning are typically constrained to a single robot and workspace, while recent work such as RT... | [
-0.03825439140200615,
-0.016999948769807816,
-0.05610163137316704,
0.036559805274009705,
0.00048448951565660536,
0.016936851665377617,
-0.01390822883695364,
0.027221549302339554,
0.023147331550717354,
-0.008522508665919304,
0.007229034323245287,
-0.017739076167345047,
0.030899163335561752,
... | ||
corl_2024_ZMnD6QZAE6 | ZMnD6QZAE6 | corl | 2,024 | OpenVLA: An Open-Source Vision-Language-Action Model | Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable polic... | Moo Jin Kim;Karl Pertsch;Siddharth Karamcheti;Ted Xiao;Ashwin Balakrishna;Suraj Nair;Rafael Rafailov;Ethan P Foster;Pannag R Sanketi;Quan Vuong;Thomas Kollar;Benjamin Burchfiel;Russ Tedrake;Dorsa Sadigh;Sergey Levine;Percy Liang;Chelsea Finn | Stanford University;Stanford University;Stanford University;;Toyota Research Institute;Toyota Research Institute;Stanford University;Stanford University;Google;physical intelligence;Toyota Research Institute;Dexterous Manipulation Group, Toyota Research Institute;Massachusetts Institute of Technology;Stanford Universit... | Poster | main | Vision-Language-Action Models;Generalist Policies;Large-scale Robot Learning;Robotic Manipulation;Robotics;Vision-Language Models | https://github.com/openvla/openvla | https://openreview.net/forum?id=ZMnD6QZAE6 | 437 | OpenVLA: An Open-Source Vision-Language-Action Model
Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-acti... | [
-0.0022436471190303564,
0.002795591251924634,
-0.01383910235017538,
0.032325178384780884,
-0.03445427119731903,
0.009178240783512592,
0.010775060392916203,
0.03228815272450447,
0.023382991552352905,
0.0003095284046139568,
-0.007257428951561451,
-0.006350249983370304,
0.002668308559805155,
... | |
corl_2024_ZdgaF8fOc0 | ZdgaF8fOc0 | corl | 2,024 | Bridging the gap between Learning-to-plan, Motion Primitives and Safe Reinforcement Learning | Trajectory planning under kinodynamic constraints is fundamental for advanced robotics applications that require dexterous, reactive, and rapid skills in complex environments. These constraints, which may represent task, safety, or actuator limitations, are essential for ensuring the proper functioning of robotic platf... | Piotr Kicki;Davide Tateo;Puze Liu;Jonas Günster;Jan Peters;Krzysztof Walas | IDEAS NCBR Sp.;Technische Universität Darmstadt;TU Darmstadt;Technische Universität Darmstadt;TU Darmstadt;Technical University of Poznan | Poster | main | safe reinforcement learning;motion planning;motion primitives | https://github.com/pkicki/spline_rl/ | https://openreview.net/forum?id=ZdgaF8fOc0 | 2 | Bridging the gap between Learning-to-plan, Motion Primitives and Safe Reinforcement Learning
Trajectory planning under kinodynamic constraints is fundamental for advanced robotics applications that require dexterous, reactive, and rapid skills in complex environments. These constraints, which may represent task, safety... | [
-0.0706537738442421,
-0.007929857820272446,
-0.013270947150886059,
-0.010289656929671764,
-0.025158725678920746,
0.024597980082035065,
0.0160653218626976,
0.006425194442272186,
0.0032943724654614925,
-0.0020665761549025774,
-0.03327082470059395,
-0.034822218120098114,
-0.013224218040704727,
... | |
corl_2024_aaY5fVFMVf | aaY5fVFMVf | corl | 2,024 | Conformal Prediction for Semantically-Aware Autonomous Perception in Urban Environments | We introduce Knowledge-Refined Prediction Sets (KRPS), a novel approach that performs semantically-aware uncertainty quantification for multitask-based autonomous perception in urban environments. KRPS extends conformal prediction (CP) to ensure 2 properties not typically addressed by CP frameworks: semantic label cons... | Achref Doula;Tobias Güdelhöfer;Max Mühlhäuser;Alejandro Sanchez Guinea | ;Technische Universität Darmstadt;Technische Universität Darmstadt;Technische Universität Darmstadt | Poster | main | Uncertainty in Robotics;Robot Perception;Semantics for Robotics | https://gitlab.com/achref.d/krps | https://openreview.net/forum?id=aaY5fVFMVf | 0 | Conformal Prediction for Semantically-Aware Autonomous Perception in Urban Environments
We introduce Knowledge-Refined Prediction Sets (KRPS), a novel approach that performs semantically-aware uncertainty quantification for multitask-based autonomous perception in urban environments. KRPS extends conformal prediction (... | [
-0.07407699525356293,
0.0008498495444655418,
0.008731970563530922,
0.028614714741706848,
-0.019929438829421997,
-0.0608903206884861,
0.007499222178012133,
-0.0023137389216572046,
-0.00012928686919622123,
0.03089343197643757,
-0.02990349754691124,
-0.0022857217118144035,
-0.011038705706596375... | |
corl_2024_adf3pO9baG | adf3pO9baG | corl | 2,024 | Dreaming to Assist: Learning to Align with Human Objectives for Shared Control in High-Speed Racing | Tight coordination is required for effective human-robot teams in domains involving fast dynamics and tactical decisions, such as multi-car racing. In such settings, robot teammates must react to cues of a human teammate's tactical objective to assist in a way that is consistent with the objective (e.g., navigating le... | Jonathan DeCastro;Andrew Silva;Deepak Gopinath;Emily Sumner;Thomas Matrai Balch;Laporsha Dees;Guy Rosman | Toyota Research Institute;Toyota Research Institute;;;Toyota Research Institute;;Toyota Research Institute | Poster | main | Recurrent State-Space Models;Human-Robot Interactions;Shared-Control | https://openreview.net/forum?id=adf3pO9baG | 2 | Dreaming to Assist: Learning to Align with Human Objectives for Shared Control in High-Speed Racing
Tight coordination is required for effective human-robot teams in domains involving fast dynamics and tactical decisions, such as multi-car racing. In such settings, robot teammates must react to cues of a human teammat... | [
-0.04954240843653679,
-0.034921810030937195,
0.010290293022990227,
-0.010979416780173779,
-0.03352493792772293,
-0.018559778109192848,
-0.00036376886419020593,
-0.009396295063197613,
-0.006560644134879112,
0.007408080156892538,
-0.047940660268068314,
-0.026708200573921204,
0.0100947311148047... | ||
corl_2024_bftFwjSJxk | bftFwjSJxk | corl | 2,024 | Rate-Informed Discovery via Bayesian Adaptive Multifidelity Sampling | Ensuring the safety of autonomous vehicles (AVs) requires both accurate estimation of their performance and efficient discovery of potential failure cases. This paper introduces Bayesian adaptive multifidelity sampling (BAMS), which leverages the power of adaptive Bayesian sampling to achieve efficient discovery while ... | Aman Sinha;Payam Nikdel;Supratik Paul;Shimon Whiteson | Princeton University;Waymo;Waymo;University of Oxford | Poster | main | Autonomous Driving;Rare-event Simulation;Adaptive Sampling | https://openreview.net/forum?id=bftFwjSJxk | 0 | Rate-Informed Discovery via Bayesian Adaptive Multifidelity Sampling
Ensuring the safety of autonomous vehicles (AVs) requires both accurate estimation of their performance and efficient discovery of potential failure cases. This paper introduces Bayesian adaptive multifidelity sampling (BAMS), which leverages the powe... | [
0.024055885151028633,
-0.03543290123343468,
-0.03773472085595131,
0.05569644644856453,
-0.01784852333366871,
-0.053319159895181656,
0.04173460230231285,
0.0018725855043157935,
0.010631757788360119,
0.06780929118394852,
-0.0722619891166687,
-0.009424246847629547,
-0.005273427348583937,
0.02... | ||
corl_2024_bk28WlkqZn | bk28WlkqZn | corl | 2,024 | 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing | Tactile and visual perception are both crucial for humans to perform fine-grained interactions with their environment. Developing similar multi-modal sensing capabilities for robots can significantly enhance and expand their manipulation skills. This paper introduces **3D-ViTac**, a multi-modal sensing and learning sys... | Binghao Huang;Yixuan Wang;Xinyi Yang;Yiyue Luo;Yunzhu Li | University of Illinois Urbana-Champaign;University of Illinois, Urbana Champaign;Zhejiang University;Computer Science and Artificial Intelligence Laboratory, Electrical Engineering & Computer Science;University of Illinois Urbana-Champaign | Poster | main | Contact-Rich Manipulation;Multi-Modal Perception;Tactile Sensing;Imitation Learning | https://openreview.net/forum?id=bk28WlkqZn | 18 | 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing
Tactile and visual perception are both crucial for humans to perform fine-grained interactions with their environment. Developing similar multi-modal sensing capabilities for robots can significantly enhance and expand their manipulation skills. Th... | [
-0.08286090195178986,
-0.03426986187696457,
-0.016723617911338806,
0.010778273455798626,
-0.021463066339492798,
-0.005234894342720509,
0.03135327994823456,
0.019686942920088768,
0.004582869820296764,
0.03249373659491539,
-0.019668245688080788,
-0.01679840311408043,
0.02552011050283909,
0.0... | ||
corl_2024_bt0PX0e4rE | bt0PX0e4rE | corl | 2,024 | Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight | Learning visuomotor policies for agile quadrotor flight presents significant difficulties, primarily from inefficient policy exploration caused by high-dimensional visual inputs and the need for precise and low-latency control.
To address these challenges, we propose a novel approach that combines the performance of Re... | Jiaxu Xing;Angel Romero;Leonard Bauersfeld;Davide Scaramuzza | Department of Informatics, University of Zurich, University of Zurich;;University of Zurich; | Poster | main | Quadrotor;Visuomotor Control;Reinforcement Learning | https://openreview.net/forum?id=bt0PX0e4rE | 19 | Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight
Learning visuomotor policies for agile quadrotor flight presents significant difficulties, primarily from inefficient policy exploration caused by high-dimensional visual inputs and the need for precise and low-latency control.
To address... | [
-0.08389715850353241,
-0.030331194400787354,
0.036982934921979904,
-0.015814095735549927,
-0.034666869789361954,
-0.004859105683863163,
0.019899634644389153,
0.05847602337598801,
-0.0022396354470402002,
0.007096424698829651,
-0.040577467530965805,
-0.018408089876174927,
0.0001182640990009531... | ||
corl_2024_cDXnnOhNrF | cDXnnOhNrF | corl | 2,024 | Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception | Rapid advances in perception have enabled large pre-trained models to be used out of the box for transforming high-dimensional, noisy, and partial observations of the world into rich occupancy representations. However, the reliability of these models and consequently their safe integration onto robots remains unknown w... | Anushri Dixit;Zhiting Mei;Meghan Booker;Mariko Storey-Matsutani;Allen Z. Ren;Anirudha Majumdar | Princeton University;Princeton University;Princeton University;;Google DeepMind;Princeton University | Poster | main | Uncertainty quantification;occupancy prediction;robot navigation | https://github.com/irom-lab/perception-guarantees | https://openreview.net/forum?id=cDXnnOhNrF | 8 | Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception
Rapid advances in perception have enabled large pre-trained models to be used out of the box for transforming high-dimensional, noisy, and partial observations of the world into rich occupancy representations. However,... | [
-0.047573186457157135,
-0.00749026658013463,
-0.031746454536914825,
0.023619214072823524,
-0.0031662764959037304,
0.003352254629135132,
0.02568357065320015,
-0.011223776265978813,
0.032676346600055695,
0.029458925127983093,
-0.024288734421133995,
-0.02767353504896164,
0.009266356937587261,
... | |
corl_2024_cGswIOxHcN | cGswIOxHcN | corl | 2,024 | Learning Visual Parkour from Generated Images | Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color perception into the sim-to-real pipeline that matches... | Alan Yu;Ge Yang;Ran Choi;Yajvan Ravan;John Leonard;Phillip Isola | Massachusetts Institute of Technology;Massachusetts Institute of Technology;Massachusetts Institute of Technology;Massachusetts Institute of Technology;Massachusetts Institute of Technology;Massachusetts Institute of Technology | Poster | main | Generative AI;Simulation;Legged Locomotion;Sensory Motor-learning | https://openreview.net/forum?id=cGswIOxHcN | 1 | Learning Visual Parkour from Generated Images
Fast and accurate physics simulation is an essential component of robot learning, where robots can explore failure scenarios that are difficult to produce in the real world and learn from unlimited on-policy data. Yet, it remains challenging to incorporate RGB-color percept... | [
-0.08230286836624146,
-0.034379445016384125,
-0.0003675891784951091,
0.03549264743924141,
-0.030204931274056435,
-0.017820538952946663,
-0.01860905811190605,
0.025362493470311165,
0.027217833325266838,
0.01151238288730383,
-0.053767744451761246,
-0.030742978677153587,
0.005092907696962357,
... | ||
corl_2024_cNI0ZkK1yC | cNI0ZkK1yC | corl | 2,024 | Flow as the Cross-domain Manipulation Interface | We present Im2Flow2Act, a scalable learning framework that enables robots to acquire real-world manipulation skills without the need of real-world robot training data. The key idea behind Im2Flow2Act is to use object flow as the manipulation interface, bridging domain gaps between different embodiments (i.e., human and... | Mengda Xu;Zhenjia Xu;Yinghao Xu;Cheng Chi;Gordon Wetzstein;Manuela Veloso;Shuran Song | Columbia University;Columbia University;Stanford University;Stanford University;Stanford University;School of Computer Science, Carnegie Mellon University;Stanford University | Poster | main | Robots;Learning;cross-domain;cross-embodiment | https://github.com/real-stanford/im2Flow2Act | https://openreview.net/forum?id=cNI0ZkK1yC | 48 | Flow as the Cross-domain Manipulation Interface
We present Im2Flow2Act, a scalable learning framework that enables robots to acquire real-world manipulation skills without the need of real-world robot training data. The key idea behind Im2Flow2Act is to use object flow as the manipulation interface, bridging domain gap... | [
-0.05842634662985802,
-0.005436795763671398,
-0.02670040726661682,
0.02892393432557583,
-0.03017127886414528,
-0.03136439248919487,
-0.030333975329995155,
0.005161114502698183,
-0.0038143438287079334,
0.03284674137830734,
-0.022162964567542076,
-0.034654486924409866,
0.01824919506907463,
-... | |
corl_2024_cT2N3p1AcE | cT2N3p1AcE | corl | 2,024 | Visual Whole-Body Control for Legged Loco-Manipulation | We study the problem of mobile manipulation using legged robots equipped with an arm, namely legged loco-manipulation. The robot legs, while usually utilized for mobility, offer an opportunity to amplify the manipulation capabilities by conducting whole-body control. That is, the robot can control the legs and the arm ... | Minghuan Liu;Zixuan Chen;Xuxin Cheng;Yandong Ji;Ri-Zhao Qiu;Ruihan Yang;Xiaolong Wang | Shanghai Jiaotong University;Fudan University;University of California, San Diego;University of California, San Diego;University of California, San Diego;University of California, San Diego;University of California, San Diego | Poster | main | Robot Learning; Reinforcement Learning; Imitation Learning; Mobile Loco-Manipulation | https://github.com/Ericonaldo/visual_wholebody | https://openreview.net/forum?id=cT2N3p1AcE | 47 | Visual Whole-Body Control for Legged Loco-Manipulation
We study the problem of mobile manipulation using legged robots equipped with an arm, namely legged loco-manipulation. The robot legs, while usually utilized for mobility, offer an opportunity to amplify the manipulation capabilities by conducting whole-body contro... | [
-0.046174902468919754,
-0.027941063046455383,
-0.035605646669864655,
0.05089733749628067,
-0.014898153021931648,
-0.0010019944747909904,
0.01358636561781168,
0.0220942422747612,
0.017362438142299652,
0.016378598287701607,
-0.035680606961250305,
-0.008015955798327923,
0.037142314016819,
0.0... | |
corl_2024_clqzoCrulY | clqzoCrulY | corl | 2,024 | OrbitGrasp: SE(3)-Equivariant Grasp Learning | While grasp detection is an important part of any robotic manipulation pipeline, reliable and accurate grasp detection in $\mathrm{SE}(3)$ remains a research challenge. Many robotics applications in unstructured environments such as the home or warehouse would benefit a lot from better grasp performance. This paper pr... | Boce Hu;Xupeng Zhu;Dian Wang;Zihao Dong;Haojie Huang;Chenghao Wang;Robin Walters;Robert Platt | Northeastern University;Northeastern University;Northeastern University;Northeastern University;Northeastern University;Northeastern University;Northeastern University ;Northeastern University | Poster | main | Grasp Detection;Equivariance;Symmetry;Grasp Learning | https://openreview.net/forum?id=clqzoCrulY | 13 | OrbitGrasp: SE(3)-Equivariant Grasp Learning
While grasp detection is an important part of any robotic manipulation pipeline, reliable and accurate grasp detection in $\mathrm{SE}(3)$ remains a research challenge. Many robotics applications in unstructured environments such as the home or warehouse would benefit a lot... | [
-0.05721291899681091,
-0.0023819073103368282,
-0.036187078803777695,
-0.01788610965013504,
0.0018645114032551646,
-0.04356026649475098,
0.017537249252200127,
0.03564021736383438,
0.023892145603895187,
0.047558002173900604,
-0.0059164646081626415,
0.0010365599300712347,
0.01716010458767414,
... | ||
corl_2024_cocHfT7CEs | cocHfT7CEs | corl | 2,024 | Generative Image as Action Models | Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to “draw joint-actions” as targets on RGB... | Mohit Shridhar;Yat Long Lo;Stephen James | Dyson;Dyson Robot Learning Lab;Dyson | Poster | main | Diffusion Models;Image Generation;Behavior Cloning;Visuomotor | https://github.com/MohitShridhar/genima | https://openreview.net/forum?id=cocHfT7CEs | 10 | Generative Image as Action Models
Image-generation diffusion models have been fine-tuned to unlock new capabilities such as image-editing and novel view synthesis. Can we similarly unlock image-generation models for visuomotor control? We present GENIMA, a behavior-cloning agent that fine-tunes Stable Diffusion to “dra... | [
-0.04968841373920441,
-0.029231706634163857,
-0.013747493736445904,
0.0004473191511351615,
-0.041315607726573944,
-0.010785932652652264,
-0.01985342800617218,
0.04712904244661331,
0.018564600497484207,
0.027111081406474113,
-0.022266551852226257,
0.007636989001184702,
-0.044533103704452515,
... | |
corl_2024_cq2uB30uBM | cq2uB30uBM | corl | 2,024 | Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents | When we, humans, perform a task, we consider changes in environments such as objects' arrangement due to interactions with objects and other reasons; e.g., when we find a mug to clean, if it is already clean, we skip cleaning it. But even the state-of-the-art embodied agents often ignore changed environments when perfo... | Jinyeon Kim;Cheolhong Min;Byeonghwi Kim;Jonghyun Choi | Yonsei University;Yonsei University;Seoul National University;Yonsei University | Poster | main | Replanning;Environmental Feedback;Brain plasticity;Embodied AI | https://github.com/snumprlab/pred | https://openreview.net/forum?id=cq2uB30uBM | 0 | Pre-emptive Action Revision by Environmental Feedback for Embodied Instruction Following Agents
When we, humans, perform a task, we consider changes in environments such as objects' arrangement due to interactions with objects and other reasons; e.g., when we find a mug to clean, if it is already clean, we skip cleanin... | [
-0.07708834111690521,
-0.016948698088526726,
-0.005396554246544838,
-0.027732549235224724,
-0.002559429267421365,
-0.050392523407936096,
0.0015631957212463021,
0.014625309966504574,
0.015439883805811405,
0.02539990469813347,
-0.02539990469813347,
-0.0411359965801239,
-0.018429741263389587,
... | |
corl_2024_ctzBccpolr | ctzBccpolr | corl | 2,024 | RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning | Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question. Emerging research such as the Open-X Embodiment (OXE) project has shown promise in leveraging skills by combining datasets including different rob... | Lawrence Yunliang Chen;Chenfeng Xu;Karthik Dharmarajan;Richard Cheng;Kurt Keutzer;Masayoshi Tomizuka;Quan Vuong;Ken Goldberg | University of California, Berkeley;University of California, Berkeley;Electrical Engineering & Computer Science Department, University of California, Berkeley;Toyota Research Institute;University of California, Berkeley;physical intelligence;University of California, Berkeley;University of California, Berkeley | Poster | main | Cross-Embodiment Learning;Viewpoint Robust;Data Augmentation | https://openreview.net/forum?id=ctzBccpolr | 21 | RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning
Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question. Emerging research such as the Open-X Embodiment (OXE) project has s... | [
-0.06938105821609497,
-0.03853265941143036,
-0.04243047162890434,
0.048184387385845184,
-0.03736331686377525,
0.003566034371033311,
0.04989200085401535,
0.03478333726525307,
0.025113048031926155,
-0.015025138854980469,
-0.04302442446351051,
-0.04402671754360199,
0.00797195453196764,
0.0240... | ||
corl_2024_cvAIaS6V2I | cvAIaS6V2I | corl | 2,024 | OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation | Open-sourced, user-friendly tools form the bedrock of scientific advancement across disciplines. The widespread adoption of data-driven learning has led to remarkable progress in multi-fingered dexterity, bimanual manipulation, and applications ranging from logistics to home robotics. However, existing data collection ... | Aadhithya Iyer;Zhuoran Peng;Yinlong Dai;Irmak Guzey;Siddhant Haldar;Soumith Chintala;Lerrel Pinto | New York University;New York University;New York University;New York University;New York University;Meta Facebook;New York University | Poster | main | Teleoperation;Robot Learning;Robotic Manipulation | https://github.com/aadhithya14/Open-Teach | https://openreview.net/forum?id=cvAIaS6V2I | 56 | OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation
Open-sourced, user-friendly tools form the bedrock of scientific advancement across disciplines. The widespread adoption of data-driven learning has led to remarkable progress in multi-fingered dexterity, bimanual manipulation, and applications rangi... | [
-0.039158839732408524,
-0.03707312420010567,
-0.04780234396457672,
-0.005594790913164616,
-0.008464998565614223,
0.02004917524755001,
-0.0004421575868036598,
0.024709152057766914,
0.028279295191168785,
-0.0021221216302365065,
-0.0193727258592844,
0.0051062447018921375,
0.037524089217185974,
... | |
corl_2024_cvUXoou8iz | cvUXoou8iz | corl | 2,024 | SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation | Robot learning has proven to be a general and effective technique for programming manipulators. Imitation learning is able to teach robots solely from human demonstrations but is bottlenecked by the capabilities of the demonstrations. Reinforcement learning uses exploration to discover better behaviors; however, the sp... | Zihan Zhou;Animesh Garg;Dieter Fox;Caelan Reed Garrett;Ajay Mandlekar | Department of Computer Science, University of Toronto;NVIDIA;Department of Computer Science;NVIDIA;NVIDIA | Poster | main | Reinforcement Learning;Manipulation Planning;Imitation Learning | https://openreview.net/forum?id=cvUXoou8iz | 1 | SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation
Robot learning has proven to be a general and effective technique for programming manipulators. Imitation learning is able to teach robots solely from human demonstrations but is bottlenecked by the capabilities of the demo... | [
-0.06902037560939789,
-0.009072507731616497,
0.012140391394495964,
-0.0018711753655225039,
-0.015250430442392826,
-0.00013348810898605734,
-0.0005380507209338248,
0.018707070499658585,
-0.0037095642182976007,
0.040393032133579254,
-0.00890389084815979,
-0.0015163780190050602,
-0.032055880874... | ||
corl_2024_cvVEkS5yij | cvVEkS5yij | corl | 2,024 | Meta-Control: Automatic Model-based Control Synthesis for Heterogeneous Robot Skills | The requirements for real-world manipulation tasks are diverse and often conflicting; some tasks require precise motion while others require force compliance; some tasks require avoidance of certain regions while others require convergence to certain states. Satisfying these varied requirements with a fixed state-actio... | Tianhao Wei;Liqian Ma;Rui Chen;Weiye Zhao;Changliu Liu | Carnegie Mellon University;;Carnegie Mellon University;Carnegie Mellon University;Carnegie Mellon University | Poster | main | embodied agent;model-based control;LLM;manipulation | https://openreview.net/forum?id=cvVEkS5yij | 4 | Meta-Control: Automatic Model-based Control Synthesis for Heterogeneous Robot Skills
The requirements for real-world manipulation tasks are diverse and often conflicting; some tasks require precise motion while others require force compliance; some tasks require avoidance of certain regions while others require converg... | [
-0.03131106123328209,
-0.020955482497811317,
-0.013606968335807323,
0.013419026508927345,
-0.007860654965043068,
-0.004202842712402344,
-0.024169282987713814,
0.037907809019088745,
0.02063598297536373,
0.020654777064919472,
-0.041422318667173386,
-0.016125384718179703,
0.013184099458158016,
... | ||
corl_2024_dUo6j3YURS | dUo6j3YURS | corl | 2,024 | MOSAIC: Modular Foundation Models for Assistive and Interactive Cooking | We present MOSAIC, a modular architecture for coordinating multiple robots to (a) interact with users using natural language and (b) manipulate an open vocabulary of everyday objects. At several levels, MOSAIC employs modularity: it leverages multiple large-scale pre-trained models for high-level tasks like language an... | Huaxiaoyue Wang;Kushal Kedia;Juntao Ren;Rahma Abdullah;Atiksh Bhardwaj;Angela Chao;Kelly Y Chen;Nathaniel Chin;Prithwish Dan;Xinyi Fan;Gonzalo Gonzalez-Pumariega;Aditya Kompella;Maximus Adrian Pace;Yash Sharma;Xiangwan Sun;Neha Sunkara;Sanjiban Choudhury | Cornell University;Cornell University;Department of Computer Science, Cornell University;Cornell University;Cornell University;Cornell University;Cornell University;Cornell University;Department of Computer Science, Cornell University;Cornell University;Cornell University;Department of Computer Science, Cornell Univers... | Poster | main | Foundation Models;Human-Robot Interaction;Model Learning | https://github.com/portal-cornell/MOSAIC/ | https://openreview.net/forum?id=dUo6j3YURS | 0 | MOSAIC: Modular Foundation Models for Assistive and Interactive Cooking
We present MOSAIC, a modular architecture for coordinating multiple robots to (a) interact with users using natural language and (b) manipulate an open vocabulary of everyday objects. At several levels, MOSAIC employs modularity: it leverages multi... | [
0.00961967185139656,
-0.04507773369550705,
-0.00895787589251995,
-0.0006759769166819751,
-0.0034342464059591293,
-0.005152551457285881,
0.01417187973856926,
-0.021215276792645454,
-0.00871206633746624,
0.03068840689957142,
-0.019229888916015625,
-0.018624819815158844,
0.010267285630106926,
... | |
corl_2024_dXSGw7Cy55 | dXSGw7Cy55 | corl | 2,024 | Contrast Sets for Evaluating Language-Guided Robot Policies | Robot evaluations in language-guided, real world settings are time-consuming and often sample only a small space of potential instructions across complex scenes. In this work, we introduce contrast sets for robotics as an approach to make small, but specific, perturbations to otherwise independent, identically distribu... | Abrar Anwar;Rohan Gupta;Jesse Thomason | University of Southern California;University of Southern California;Amazon | Poster | main | Evaluation;Language-guided robots | https://openreview.net/forum?id=dXSGw7Cy55 | 3 | Contrast Sets for Evaluating Language-Guided Robot Policies
Robot evaluations in language-guided, real world settings are time-consuming and often sample only a small space of potential instructions across complex scenes. In this work, we introduce contrast sets for robotics as an approach to make small, but specific, ... | [
-0.0370430164039135,
-0.04078096151351929,
-0.029940925538539886,
0.009485030546784401,
-0.007195540703833103,
-0.025548841804265976,
0.01126989908516407,
0.025212427601218224,
0.02579180710017681,
0.018119679763913155,
-0.03752895072102547,
-0.011522210203111172,
0.00869071763008833,
-0.0... | ||
corl_2024_deywgeWmL5 | deywgeWmL5 | corl | 2,024 | TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations | Unsupervised goal-conditioned reinforcement learning (GCRL) is a promising paradigm for developing diverse robotic skills without external supervision. However, existing unsupervised GCRL methods often struggle to cover a wide range of states in complex environments due to their limited exploration and sparse or noisy ... | Junik Bae;Kwanyoung Park;Youngwoon Lee | Seoul National University;Seoul National University;University of California, Berkeley | Poster | main | Unsupervised Goal-Conditioned Reinforcement Learning;Temporal Distance-Aware Representations | https://github.com/heatz123/tldr | https://openreview.net/forum?id=deywgeWmL5 | 2 | TLDR: Unsupervised Goal-Conditioned RL via Temporal Distance-Aware Representations
Unsupervised goal-conditioned reinforcement learning (GCRL) is a promising paradigm for developing diverse robotic skills without external supervision. However, existing unsupervised GCRL methods often struggle to cover a wide range of s... | [
-0.11180917173624039,
-0.00524105504155159,
0.005042320117354393,
-0.020612966269254684,
-0.04440569132566452,
0.013819007202982903,
0.01165603194385767,
-0.015297964215278625,
-0.012931632809340954,
-0.008060317486524582,
-0.026288464665412903,
-0.033498384058475494,
-0.009853553026914597,
... | |
corl_2024_dsxmR6lYlg | dsxmR6lYlg | corl | 2,024 | Reinforcement Learning with Foundation Priors: Let Embodied Agent Efficiently Learn on Its Own | Reinforcement learning (RL) is a promising approach for solving robotic manipulation tasks.
However, it is challenging to apply the RL algorithms directly in the real world.
For one thing, RL is data-intensive and typically requires millions of interactions with environments, which are impractical in real scenarios.
F... | Weirui Ye;Yunsheng Zhang;Haoyang Weng;Xianfan Gu;Shengjie Wang;Tong Zhang;Mengchen Wang;Pieter Abbeel;Yang Gao | Tsinghua University;;Shanghai Qi Zhi Institute;Tsinghua University;Tsinghua University;Tsinghua University;Covariant;Tsinghua University;Tsinghua University | Poster | main | Reinforcement Learning;Foundation Models;Robotics;VLMs | https://github.com/YeWR/RLFP | https://openreview.net/forum?id=dsxmR6lYlg | 3 | Reinforcement Learning with Foundation Priors: Let Embodied Agent Efficiently Learn on Its Own
Reinforcement learning (RL) is a promising approach for solving robotic manipulation tasks.
However, it is challenging to apply the RL algorithms directly in the real world.
For one thing, RL is data-intensive and typically r... | [
-0.04944833368062973,
-0.00931541621685028,
0.005733273923397064,
-0.0173106100410223,
-0.020052609965205193,
-0.050039201974868774,
0.03345794603228569,
0.008456810377538204,
0.014633235521614552,
0.0036536927800625563,
-0.05387984961271286,
-0.02937725931406021,
-0.033642593771219254,
0.... | |
corl_2024_eJHy0AF5TO | eJHy0AF5TO | corl | 2,024 | RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation | We present RiEMann, an end-to-end near Real-time SE(3)-Equivariant Robot Manipulation imitation learning framework from scene point cloud input. Compared to previous methods that rely on descriptor field matching, RiEMann directly predicts the target actions for manipulation without any object segmentation. RiEMann can... | Chongkai Gao;Zhengrong Xue;Shuying Deng;Tianhai Liang;Siqi Yang;Lin Shao;Huazhe Xu | National University of Singapore;Tsinghua University;Tsinghua University;Harbin Institute of Technology, Shenzhen;Tsinghua University;National University of Singapore;Tsinghua University | Poster | main | SE(3)-Equivariance;Manipulation;Imitation Learning | https://github.com/HeegerGao/RiEMann | https://openreview.net/forum?id=eJHy0AF5TO | 16 | RiEMann: Near Real-Time SE(3)-Equivariant Robot Manipulation without Point Cloud Segmentation
We present RiEMann, an end-to-end near Real-time SE(3)-Equivariant Robot Manipulation imitation learning framework from scene point cloud input. Compared to previous methods that rely on descriptor field matching, RiEMann dire... | [
-0.0386669859290123,
0.008513959124684334,
-0.007782471366226673,
-0.013500111177563667,
-0.03088914230465889,
0.03153729811310768,
0.021870549768209457,
0.03533362224698067,
0.02492613159120083,
0.027185408398509026,
-0.021648326888680458,
0.025148354470729828,
0.016500135883688927,
0.021... | |
corl_2024_eTRncsYYdv | eTRncsYYdv | corl | 2,024 | Solving Offline Reinforcement Learning with Decision Tree Regression | This study presents a novel approach to addressing offline reinforcement learning (RL) problems by reframing them as regression tasks that can be effectively solved using Decision Trees. Mainly, we introduce two distinct frameworks: return-conditioned and return-weighted decision tree policies (RCDTP and RWDTP), both o... | Prajwal Koirala;Cody Fleming | Iowa State University;Iowa State University | Poster | main | Offline Reinforcement Learning;Decision Trees | https://github.com/PrajwalKoirala/Offline-Reinforcement-Learning-with-Decision-Tree-Regression/tree/main | https://openreview.net/forum?id=eTRncsYYdv | 2 | Solving Offline Reinforcement Learning with Decision Tree Regression
This study presents a novel approach to addressing offline reinforcement learning (RL) problems by reframing them as regression tasks that can be effectively solved using Decision Trees. Mainly, we introduce two distinct frameworks: return-conditioned... | [
-0.06636025756597519,
0.006784731987863779,
-0.009145447053015232,
0.020465726032853127,
-0.028440110385417938,
-0.023365503177046776,
0.07126756757497787,
0.03752979263663292,
0.02656269073486328,
0.028142698109149933,
-0.03779003024101257,
-0.025633275508880615,
-0.0070542627945542336,
0... | |
corl_2024_eU5E0oTtpS | eU5E0oTtpS | corl | 2,024 | Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models | Large Language Models (LLM) have emerged as a tool for robots to generate task plans using common sense reasoning. For the LLM to generate actionable plans, scene context must be provided, often through a map. Recent works have shifted from explicit maps with fixed semantic classes to implicit open vocabulary maps base... | Mike Zhang;Kaixian Qu;Vaishakh Patil;Cesar Cadena;Marco Hutter | ETHZ - ETH Zurich;ETHZ - ETH Zurich;ETHZ - ETH Zurich;ETH Zurich;ETHZ - ETH Zurich | Poster | main | Scene Understanding;Large Language Models | https://openreview.net/forum?id=eU5E0oTtpS | 3 | Tag Map: A Text-Based Map for Spatial Reasoning and Navigation with Large Language Models
Large Language Models (LLM) have emerged as a tool for robots to generate task plans using common sense reasoning. For the LLM to generate actionable plans, scene context must be provided, often through a map. Recent works have sh... | [
-0.037604331970214844,
-0.020067963749170303,
0.010154093615710735,
-0.0037558136973530054,
-0.017628762871026993,
-0.05399501696228981,
0.00176241435110569,
-0.028383418917655945,
0.0013639655662700534,
-0.017970621585845947,
0.02001252770423889,
0.0012808110332116485,
0.020770156756043434,... | ||
corl_2024_edP2dmingV | edP2dmingV | corl | 2,024 | Large Scale Mapping of Indoor Magnetic Field by Local and Sparse Gaussian Processes | Magnetometer-based indoor navigation uses variations in the magnetic field to determine the robot's location. For that, a magnetic map of the environment has to be built beforehand from a collection of localized magnetic measurements. Existing solutions built on sparse Gaussian Process (GP) regression do not scale well... | Iad ABDUL-RAOUF;Vincent Gay-Bellile;Cyril JOLY;Steve Bourgeois;Alexis Paljic | Mines ParisTech;CEA;Mines ParisTech;CEA; | Poster | main | Gaussian process regression;magnetic field maps;indoor localization | https://openreview.net/forum?id=edP2dmingV | 0 | Large Scale Mapping of Indoor Magnetic Field by Local and Sparse Gaussian Processes
Magnetometer-based indoor navigation uses variations in the magnetic field to determine the robot's location. For that, a magnetic map of the environment has to be built beforehand from a collection of localized magnetic measurements. E... | [
-0.07735815644264221,
0.004099742509424686,
-0.03642986714839935,
-0.011135444976389408,
0.0022031220141798258,
-0.010748285800218582,
-0.02973753772675991,
-0.0567096509039402,
0.025128494948148727,
0.04181322827935219,
-0.05434982106089592,
0.028723549097776413,
0.04358309879899025,
0.00... | ||
corl_2024_eeoX7tCoK2 | eeoX7tCoK2 | corl | 2,024 | Shelf-Supervised Cross-Modal Pre-Training for 3D Object Detection | State-of-the-art 3D object detectors are often trained on massive labeled datasets. However, annotating 3D bounding boxes remains prohibitively expensive and time-consuming, particularly for LiDAR. Instead, recent works demonstrate that self-supervised pre-training with unlabeled data can improve detection accuracy wit... | Mehar Khurana;Neehar Peri;James Hays;Deva Ramanan | ;Carnegie Mellon University;Georgia Institute of Technology;School of Computer Science, Carnegie Mellon University | Poster | main | Shelf-Supervised 3D Object Detection;Vision-Language Models;Autonomous Vehicles | https://github.com/meharkhurana03/cm3d | https://openreview.net/forum?id=eeoX7tCoK2 | 0 | Shelf-Supervised Cross-Modal Pre-Training for 3D Object Detection
State-of-the-art 3D object detectors are often trained on massive labeled datasets. However, annotating 3D bounding boxes remains prohibitively expensive and time-consuming, particularly for LiDAR. Instead, recent works demonstrate that self-supervised p... | [
-0.0676657184958458,
-0.0389699749648571,
-0.0567425899207592,
0.004328244365751743,
0.003161128144711256,
0.021161308512091637,
0.0129869868978858,
0.02397320233285427,
0.00004397105658426881,
0.005682370159775019,
-0.027271771803498268,
-0.03940257057547569,
0.005569714121520519,
0.02532... | |
corl_2024_evCXwlCMIi | evCXwlCMIi | corl | 2,024 | Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models | Traditionally, model-based reinforcement learning (MBRL) methods exploit neural networks as flexible function approximators to represent $\textit{a priori}$ unknown environment dynamics. However, training data are typically scarce in practice, and these black-box models often fail to generalize. Modeling architectures ... | Jacob Levy;Tyler Westenbroek;David Fridovich-Keil | University of Texas at Austin;;University of Texas at Austin | Poster | main | Model-Based Reinforcement Learning;Physics-Based Models | https://github.com/CLeARoboticsLab/ssrl | https://openreview.net/forum?id=evCXwlCMIi | 6 | Learning to Walk from Three Minutes of Real-World Data with Semi-structured Dynamics Models
Traditionally, model-based reinforcement learning (MBRL) methods exploit neural networks as flexible function approximators to represent $\textit{a priori}$ unknown environment dynamics. However, training data are typically scar... | [
-0.06829247623682022,
-0.009999703615903854,
0.0011870571179315448,
0.020055323839187622,
-0.0176695603877306,
-0.03319566324353218,
0.019496161490678787,
0.01725950837135315,
-0.004582809284329414,
0.006439697463065386,
-0.04745432734489441,
-0.010959601029753685,
0.000427818187745288,
0.... | |
corl_2024_fC0wWeXsVm | fC0wWeXsVm | corl | 2,024 | Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning | We apply multi-agent deep reinforcement learning (RL) to train end-to-end robot soccer policies with fully onboard computation and sensing via egocentric RGB vision. This setting reflects many challenges of real-world robotics, including active perception, agile full-body control, and long-horizon planning in a dynamic... | Dhruva Tirumala;Markus Wulfmeier;Ben Moran;Sandy Huang;Jan Humplik;Guy Lever;Tuomas Haarnoja;Leonard Hasenclever;Arunkumar Byravan;Nathan Batchelor;Neil sreendra;Kushal Patel;Marlon Gwira;Francesco Nori;Martin Riedmiller;Nicolas Heess | Google DeepMind;Google DeepMind;Google DeepMind;Google DeepMind;;Google DeepMind;Google DeepMind;Google;Google;;;;Google DeepMind;;Google DeepMind;University College London | Poster | main | robotics;deep reinforcement learning | https://openreview.net/forum?id=fC0wWeXsVm | 13 | Learning Robot Soccer from Egocentric Vision with Deep Reinforcement Learning
We apply multi-agent deep reinforcement learning (RL) to train end-to-end robot soccer policies with fully onboard computation and sensing via egocentric RGB vision. This setting reflects many challenges of real-world robotics, including acti... | [
-0.040119677782058716,
-0.03510003536939621,
0.005183530040085316,
0.01070419978350401,
-0.0335267148911953,
0.0015218132175505161,
0.013326400890946388,
-0.007370258215814829,
0.032477833330631256,
0.0025589873548597097,
-0.016894467175006866,
-0.015124482102692127,
-0.022063950076699257,
... | ||
corl_2024_fCDOfpTCzZ | fCDOfpTCzZ | corl | 2,024 | InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment | Enabling robots to navigate following diverse language instructions in unexplored environments is an attractive goal for human-robot interaction. However, this goal is challenging because different navigation tasks require different strategies. The scarcity of instruction navigation data hinders training an instruction... | Yuxing Long;Wenzhe Cai;Hongcheng Wang;Guanqi Zhan;Hao Dong | Beijing University of Posts and Telecommunications;Southeast University;Peking University;University of Oxford;Peking University | Poster | main | Generic Instruction Navigation;Zero-shot;Unexplored Environment | https://github.com/LYX0501/InstructNav | https://openreview.net/forum?id=fCDOfpTCzZ | 34 | InstructNav: Zero-shot System for Generic Instruction Navigation in Unexplored Environment
Enabling robots to navigate following diverse language instructions in unexplored environments is an attractive goal for human-robot interaction. However, this goal is challenging because different navigation tasks require differ... | [
-0.07333215326070786,
-0.03948939964175224,
-0.02756456844508648,
0.01488746702671051,
-0.014460253529250622,
-0.002904588356614113,
0.0028210030868649483,
-0.0029695993289351463,
-0.02462979592382908,
0.03412136808037758,
0.00045623633195646107,
-0.02540992572903633,
0.003266791347414255,
... | |
corl_2024_fDRO4NHEwZ | fDRO4NHEwZ | corl | 2,024 | VIRL: Self-Supervised Visual Graph Inverse Reinforcement Learning | Learning dense reward functions from unlabeled videos for reinforcement learning exhibits scalability due to the vast diversity and quantity of video resources. Recent works use visual features or graph abstractions in videos to measure task progress as rewards, which either deteriorate in unseen domains or capture spa... | Lei Huang;Weijia Cai;Zihan Zhu;Chen Feng;Helge Rhodin;Zhengbo Zou | University of British Columbia;University of British Columbia;;New York University;;Columbia University | Poster | main | Inverse Reinforcement Learning;Learning from Video;Graph Network | https://openreview.net/forum?id=fDRO4NHEwZ | 0 | VIRL: Self-Supervised Visual Graph Inverse Reinforcement Learning
Learning dense reward functions from unlabeled videos for reinforcement learning exhibits scalability due to the vast diversity and quantity of video resources. Recent works use visual features or graph abstractions in videos to measure task progress as ... | [
-0.12736858427524567,
-0.014147873967885971,
0.019241107627749443,
0.02535299025475979,
-0.0354074127972126,
0.04312271997332573,
0.019524065777659416,
0.012223762460052967,
0.010177036747336388,
-0.009205549955368042,
-0.025334125384688377,
-0.016336077824234962,
0.004748969338834286,
0.0... | ||
corl_2024_fIj88Tn3fc | fIj88Tn3fc | corl | 2,024 | ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning | Increasingly large robotics datasets are being collected to train larger foundation models in robotics. However, despite the fact that data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work in robotics has questioned what data such models should actually be ... | Joey Hejna;Chethan Anand Bhateja;Yichen Jiang;Karl Pertsch;Dorsa Sadigh | ;University of California, Berkeley;;Stanford University;Stanford University | Poster | main | Data Curation;Data Quality;Robot Imitation Learning | https://github.com/jhejna/remix | https://openreview.net/forum?id=fIj88Tn3fc | 16 | ReMix: Optimizing Data Mixtures for Large Scale Imitation Learning
Increasingly large robotics datasets are being collected to train larger foundation models in robotics. However, despite the fact that data selection has been of utmost importance to scaling in vision and natural language processing (NLP), little work i... | [
-0.03637629747390747,
-0.02056051604449749,
-0.0074389344081282616,
0.004261984955519438,
-0.04075322300195694,
0.014730744995176792,
0.03928198665380478,
0.029829299077391624,
0.01169632188975811,
0.018942156806588173,
-0.05656900256872177,
-0.031061457470059395,
-0.013728465884923935,
0.... | |
corl_2024_fNBbEgcfwO | fNBbEgcfwO | corl | 2,024 | Surgical Robot Transformer (SRT): Imitation Learning for Surgical Tasks | We explore whether surgical manipulation tasks can be learned on the da Vinci robot via imitation learning.
However, the da Vinci system presents unique challenges which hinder straight-forward implementation of imitation learning. Notably, its forward kinematics is inconsistent due to imprecise joint measurements, and... | Ji Woong Kim;Tony Z. Zhao;Samuel Schmidgall;Anton Deguet;Marin Kobilarov;Chelsea Finn;Axel Krieger | ;Stanford University;Advanced Micro Devices;;Johns Hopkins University;Google;Johns Hopkins University | Poster | main | Imitation Learning;Manipulation;Medical Robotics | https://openreview.net/forum?id=fNBbEgcfwO | 27 | Surgical Robot Transformer (SRT): Imitation Learning for Surgical Tasks
We explore whether surgical manipulation tasks can be learned on the da Vinci robot via imitation learning.
However, the da Vinci system presents unique challenges which hinder straight-forward implementation of imitation learning. Notably, its for... | [
-0.054735489189624786,
0.017847321927547455,
-0.03578716516494751,
0.023093270137906075,
-0.023518867790699005,
-0.010047792457044125,
-0.0011478400556370616,
0.02026212401688099,
0.005592901259660721,
0.017847321927547455,
0.00044352308032102883,
0.008340778760612011,
0.008465682156383991,
... | ||
corl_2024_fR1rCXjCQX | fR1rCXjCQX | corl | 2,024 | Learning Compositional Behaviors from Demonstration and Language | We introduce Behavior from Language and Demonstration (BLADE), a framework for long-horizon robotic manipulation by integrating imitation learning and model-based planning. BLADE leverages language-annotated demonstrations, extracts abstract action knowledge from large language models (LLMs), and constructs a library o... | Weiyu Liu;Neil Nie;Ruohan Zhang;Jiayuan Mao;Jiajun Wu | Stanford University;Stanford University;Stanford University;Massachusetts Institute of Technology;Stanford University | Poster | main | Manipulation;Planning Abstractions;Learning from Language | https://openreview.net/forum?id=fR1rCXjCQX | 3 | Learning Compositional Behaviors from Demonstration and Language
We introduce Behavior from Language and Demonstration (BLADE), a framework for long-horizon robotic manipulation by integrating imitation learning and model-based planning. BLADE leverages language-annotated demonstrations, extracts abstract action knowle... | [
0.008747434243559837,
-0.01392486784607172,
0.015008951537311077,
-0.0017347675748169422,
-0.006224135868251324,
-0.0023994697257876396,
0.029232878237962723,
0.01333609875291586,
-0.014130470342934132,
0.0049344501458108425,
-0.029120732098817825,
-0.015111752785742283,
-0.02508345432579517... | ||
corl_2024_fs7ia3FqUM | fs7ia3FqUM | corl | 2,024 | Humanoid Parkour Learning | Parkour is a grand challenge for legged locomotion, even for quadruped robots, requiring active perception and various maneuvers to overcome multiple challenging obstacles. Existing methods for humanoid locomotion either optimize a trajectory for a single parkour track or train a reinforcement learning policy only to w... | Ziwen Zhuang;Shenzhe Yao;Hang Zhao | ShanghaiTech University;ShanghaiTech University;Tsinghua University | Poster | main | Humanoid Agile Locomotion;Visuomotor Control;Sim-to-Real Transfer | https://openreview.net/forum?id=fs7ia3FqUM | 38 | Humanoid Parkour Learning
Parkour is a grand challenge for legged locomotion, even for quadruped robots, requiring active perception and various maneuvers to overcome multiple challenging obstacles. Existing methods for humanoid locomotion either optimize a trajectory for a single parkour track or train a reinforcement... | [
-0.042955923825502396,
-0.01879788562655449,
-0.021403254941105843,
0.02898591011762619,
-0.008105595596134663,
-0.007101734634488821,
-0.05341475456953049,
-0.02730502560734749,
0.009991921484470367,
-0.01783604547381401,
-0.05729946494102478,
-0.009982583113014698,
0.020207960158586502,
... | ||
corl_2024_gqCQxObVz2 | gqCQxObVz2 | corl | 2,024 | 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations | Diffusion policies are conditional diffusion models that learn robot action distributions conditioned on the robot and environment state. They have recently shown to outperform both deterministic and alternative action distribution learning formulations. 3D robot policies use 3D scene feature representations aggregated... | Tsung-Wei Ke;Nikolaos Gkanatsios;Katerina Fragkiadaki | Carnegie Mellon University;Carnegie Mellon University;Carnegie Mellon University | Poster | main | Diffusion models;3D representations;manipulation;imitation learning | https://github.com/nickgkan/3d_diffuser_actor | https://openreview.net/forum?id=gqCQxObVz2 | 117 | 3D Diffuser Actor: Policy Diffusion with 3D Scene Representations
Diffusion policies are conditional diffusion models that learn robot action distributions conditioned on the robot and environment state. They have recently shown to outperform both deterministic and alternative action distribution learning formulations.... | [
-0.0317641906440258,
-0.030105676501989365,
-0.019271237775683403,
0.0002790014259517193,
-0.010978656820952892,
0.012375776655972004,
-0.020370906218886375,
0.03522544354200363,
0.04654661938548088,
0.01699077896773815,
-0.0054938350804150105,
-0.004898932762444019,
-0.012348735705018044,
... | |
corl_2024_gqFIybpsLX | gqFIybpsLX | corl | 2,024 | Avoid Everything: Model-Free Collision Avoidance with Expert-Guided Fine-Tuning | The world is full of clutter. In order to operate effectively in uncontrolled, real world spaces, robots must navigate safely by executing tasks around obstacles while in proximity to hazards. Creating safe movement for robotic manipulators remains a long-standing challenge in robotics, particularly in environments wit... | Adam Fishman;Aaron Walsman;Mohak Bhardwaj;Wentao Yuan;Balakumar Sundaralingam;Byron Boots;Dieter Fox | University of Washington;Harvard University;;University of Washington, Seattle;NVIDIA;;Department of Computer Science | Poster | main | Imitation Learning;Robotics;Collision Avoidance;Fine Tuning;Motion Planning | https://github.com/fishbotics/avoid-everything | https://openreview.net/forum?id=gqFIybpsLX | 3 | Avoid Everything: Model-Free Collision Avoidance with Expert-Guided Fine-Tuning
The world is full of clutter. In order to operate effectively in uncontrolled, real world spaces, robots must navigate safely by executing tasks around obstacles while in proximity to hazards. Creating safe movement for robotic manipulators... | [
-0.041322194039821625,
-0.024540938436985016,
-0.01223280094563961,
0.029946349561214447,
-0.009515970014035702,
-0.014935506507754326,
-0.039589449763298035,
0.016762422397732735,
0.02079293690621853,
0.02100011333823204,
-0.03947644680738449,
0.0025543859228491783,
0.0041105602867901325,
... | |
corl_2024_gvdXE7ikHI | gvdXE7ikHI | corl | 2,024 | ALOHA Unleashed: A Simple Recipe for Robot Dexterity | Recent work has shown promising results for learning end-to-end robot policies using imitation learning. In this work we address the question of how far can we push imitation learning for challenging dexterous manipulation tasks. We show that a simple recipe of large scale data collection on the ALOHA 2 platform, combi... | Tony Z. Zhao;Jonathan Tompson;Danny Driess;Pete Florence;Seyed Kamyar Seyed Ghasemipour;Chelsea Finn;Ayzaan Wahid | Stanford University;Google DeepMind;Google;Google;Google DeepMind Robotics;Google;Robotics at Google | Poster | main | Imitation Learning;Manipulation | https://openreview.net/forum?id=gvdXE7ikHI | 83 | ALOHA Unleashed: A Simple Recipe for Robot Dexterity
Recent work has shown promising results for learning end-to-end robot policies using imitation learning. In this work we address the question of how far can we push imitation learning for challenging dexterous manipulation tasks. We show that a simple recipe of large... | [
-0.007099504582583904,
-0.004870918579399586,
0.018825441598892212,
0.05130450055003166,
-0.048257824033498764,
-0.026272868737578392,
0.009892290458083153,
0.03214053809642792,
0.004532399121671915,
-0.005905283614993095,
-0.024241752922534943,
-0.0017737008165568113,
0.012130280025303364,
... | ||
corl_2024_hV97HJm7Ag | hV97HJm7Ag | corl | 2,024 | Task-Oriented Hierarchical Object Decomposition for Visuomotor Control | Good pre-trained visual representations could enable robots to learn visuomotor policy efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, these representations cannot effectively ignore any task-irrelevant inf... | Jianing Qian;Yunshuang Li;Bernadette Bucher;Dinesh Jayaraman | School of Engineering and Applied Science, University of Pennsylvania;University of Pennsylvania;Boston Dynamics AI Institute;University of Pennsylvania | Poster | main | Visual Representations;Entities;Imitation;Manipulation | https://openreview.net/forum?id=hV97HJm7Ag | 0 | Task-Oriented Hierarchical Object Decomposition for Visuomotor Control
Good pre-trained visual representations could enable robots to learn visuomotor policy efficiently. Still, existing representations take a one-size-fits-all-tasks approach that comes with two important drawbacks: (1) Being completely task-agnostic, ... | [
-0.007889959029853344,
-0.04873425513505936,
-0.02299097180366516,
0.057615045458078384,
-0.017642313614487648,
-0.012229436077177525,
0.022440509870648384,
0.01682579517364502,
0.019064342603087425,
0.025174472481012344,
-0.03478921204805374,
-0.043303027749061584,
0.019614804536104202,
0... | ||
corl_2024_iZF0FRPgfq | iZF0FRPgfq | corl | 2,024 | I Can Tell What I am Doing: Toward Real-World Natural Language Grounding of Robot Experiences | Understanding robot behaviors and experiences through natural language is crucial for developing intelligent and transparent robotic systems. Recent advancement in large language models (LLMs) makes it possible to translate complex, multi-modal robotic experiences into coherent, human-readable narratives. However, grou... | Zihan Wang;Brian Liang;Varad Dhat;Zander Brumbaugh;Nick Walker;Ranjay Krishna;Maya Cakmak | University of Washington;University of Washington;University of Washington;Department of Computer Science;University of Washington;University of Washington;University of Washington, Seattle | Poster | main | Large Language Model;Explainable AI;Failure Analysis | https://openreview.net/forum?id=iZF0FRPgfq | 4 | I Can Tell What I am Doing: Toward Real-World Natural Language Grounding of Robot Experiences
Understanding robot behaviors and experiences through natural language is crucial for developing intelligent and transparent robotic systems. Recent advancement in large language models (LLMs) makes it possible to translate co... | [
-0.06873918324708939,
-0.014285379089415073,
0.008721260353922844,
0.0011850789887830615,
-0.06369511783123016,
-0.0009031912195496261,
-0.008679840713739395,
0.019826484844088554,
-0.015491168014705181,
0.011137440800666809,
-0.019771259278059006,
0.0142393559217453,
-0.03304414451122284,
... | ||
corl_2024_itKJ5uu1gW | itKJ5uu1gW | corl | 2,024 | Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling | Videos of robots interacting with objects encode rich information about the objects' dynamics. However, existing video prediction approaches typically do not explicitly account for the 3D information from videos, such as robot actions and objects' 3D states, limiting their use in real-world robotic applications. In thi... | Mingtong Zhang;Kaifeng Zhang;Yunzhu Li | University of Illinois, Urbana Champaign;University of Illinois Urbana-Champaign;University of Illinois Urbana-Champaign | Poster | main | Dynamics Model;3D Gaussian Splatting;Action-Conditioned Video Prediction;Model-Based Planning | https://openreview.net/forum?id=itKJ5uu1gW | 5 | Dynamic 3D Gaussian Tracking for Graph-Based Neural Dynamics Modeling
Videos of robots interacting with objects encode rich information about the objects' dynamics. However, existing video prediction approaches typically do not explicitly account for the 3D information from videos, such as robot actions and objects' 3D... | [
-0.05821638181805611,
-0.04830874875187874,
-0.013406039215624332,
0.010784493759274483,
-0.028529642149806023,
-0.022418729960918427,
0.0024882080033421516,
-0.03413432464003563,
-0.04570528119802475,
0.026721680536866188,
-0.03746097534894943,
-0.007900794968008995,
0.04346340894699097,
... |
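The rows above pair each paper's metadata with a dense embedding in the vector column, which makes related-paper lookup a simple nearest-neighbor search. Below is a minimal sketch of that usage, assuming the records have been exported as JSON Lines with `id`, `title`, and full `vector` fields; the file name, field names, and export format are assumptions for illustration, not part of the source.

```python
import json
import numpy as np

# Hypothetical export: one JSON object per paper with "id", "title", "vector".
with open("corl2024_papers.jsonl") as f:
    records = [json.loads(line) for line in f]

ids = [r["id"] for r in records]
titles = [r["title"] for r in records]
vectors = np.array([r["vector"] for r in records], dtype=np.float32)

# Normalize once so a plain dot product equals cosine similarity.
vectors /= np.linalg.norm(vectors, axis=1, keepdims=True)

def most_similar(query_id: str, k: int = 5) -> list[tuple[str, float]]:
    """Return the k papers whose embeddings are closest to the given paper's."""
    q = vectors[ids.index(query_id)]
    scores = vectors @ q
    order = np.argsort(-scores)
    # Skip the first hit, which is the query paper itself.
    return [(titles[i], float(scores[i])) for i in order[1 : k + 1]]

# Example: papers most similar to SPIRE (id "cvUXoou8iz" in the rows above).
print(most_similar("cvUXoou8iz"))
```

Any free-text query would additionally need to be embedded with the same model that produced the stored vectors, which is not identified in these rows.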