| Title string | Abstract string | Status string | User string | text string | label int64 | combined_text string | __index_level_0__ int64 |
|---|---|---|---|---|---|---|---|
Perception in Plan: Coupled Perception and Planning for End-to-End Autonomous Driving | End-to-end autonomous driving has achieved remarkable advancements in recent
years. Existing methods primarily follow a perception-planning paradigm, where
perception and planning are executed sequentially within a fully differentiable
framework for planning-oriented optimization. We further advance this paradigm
through a perception-in-plan framework design, which integrates perception into
the planning process. This design facilitates targeted perception guided by
evolving planning objectives over time, ultimately enhancing planning
performance. Building on this insight, we introduce VeteranAD, a coupled
perception and planning framework for end-to-end autonomous driving. By
incorporating multi-mode anchored trajectories as planning priors, the
perception module is specifically designed to gather traffic elements along
these trajectories, enabling comprehensive and targeted perception. Planning
trajectories are then generated based on both the perception results and the
planning priors. To make perception fully serve planning, we adopt an
autoregressive strategy that progressively predicts future trajectories while
focusing on relevant regions for targeted perception at each step. With this
simple yet effective design, VeteranAD fully unleashes the potential of
planning-oriented end-to-end methods, leading to more accurate and reliable
driving behavior. Extensive experiments on the NAVSIM and Bench2Drive datasets
demonstrate that our VeteranAD achieves state-of-the-art performance. | Liked | zrz@andrew.cmu.edu | Perception in Plan: Coupled Perception and Planning for End-to-End Autonomous Driving : End-to-end autonomous driving has achieved remarkable advancements in recent
years. Existing methods primarily follow a perception-planning paradigm, where
perception and planning are executed sequentially within a fully differentiable
framework for planning-oriented optimization. We further advance this paradigm
through a perception-in-plan framework design, which integrates perception into
the planning process. This design facilitates targeted perception guided by
evolving planning objectives over time, ultimately enhancing planning
performance. Building on this insight, we introduce VeteranAD, a coupled
perception and planning framework for end-to-end autonomous driving. By
incorporating multi-mode anchored trajectories as planning priors, the
perception module is specifically designed to gather traffic elements along
these trajectories, enabling comprehensive and targeted perception. Planning
trajectories are then generated based on both the perception results and the
planning priors. To make perception fully serve planning, we adopt an
autoregressive strategy that progressively predicts future trajectories while
focusing on relevant regions for targeted perception at each step. With this
simple yet effective design, VeteranAD fully unleashes the potential of
planning-oriented end-to-end methods, leading to more accurate and reliable
driving behavior. Extensive experiments on the NAVSIM and Bench2Drive datasets
demonstrate that our VeteranAD achieves state-of-the-art performance. | 1 | zrz@andrew.cmu.edu [SEP] Perception in Plan: Coupled Perception and Planning for End-to-End Autonomous Driving : End-to-end autonomous driving has achieved remarkable advancements in recent
years. Existing methods primarily follow a perception-planning paradigm, where
perception and planning are executed sequentially within a fully differentiable
framework for planning-oriented optimization. We further advance this paradigm
through a perception-in-plan framework design, which integrates perception into
the planning process. This design facilitates targeted perception guided by
evolving planning objectives over time, ultimately enhancing planning
performance. Building on this insight, we introduce VeteranAD, a coupled
perception and planning framework for end-to-end autonomous driving. By
incorporating multi-mode anchored trajectories as planning priors, the
perception module is specifically designed to gather traffic elements along
these trajectories, enabling comprehensive and targeted perception. Planning
trajectories are then generated based on both the perception results and the
planning priors. To make perception fully serve planning, we adopt an
autoregressive strategy that progressively predicts future trajectories while
focusing on relevant regions for targeted perception at each step. With this
simple yet effective design, VeteranAD fully unleashes the potential of
planning-oriented end-to-end methods, leading to more accurate and reliable
driving behavior. Extensive experiments on the NAVSIM and Bench2Drive datasets
demonstrate that our VeteranAD achieves state-of-the-art performance. | 271 |
ETA-IK: Execution-Time-Aware Inverse Kinematics for Dual-Arm Systems | This paper presents ETA-IK, a novel Execution-Time-Aware Inverse Kinematics
method tailored for dual-arm robotic systems. The primary goal is to optimize
motion execution time by leveraging the redundancy of both arms, specifically
in tasks where only the relative pose of the robots is constrained, such as
dual-arm scanning of unknown objects. Unlike traditional inverse kinematics
methods that use surrogate metrics such as joint configuration distance, our
method incorporates direct motion execution time and implicit collisions into
the optimization process, thereby finding target joints that allow subsequent
trajectory generation to produce more efficient, collision-free motion. A
neural network-based execution-time approximator is employed to predict time-efficient
joint configurations while accounting for potential collisions. Through
experimental evaluation on a system composed of a UR5 and a KUKA iiwa robot, we
demonstrate significant reductions in execution time. The proposed method
outperforms conventional approaches, showing improved motion efficiency without
sacrificing positioning accuracy. These results highlight the potential of
ETA-IK to improve the performance of dual-arm systems in applications where
efficiency and safety are paramount. | Liked | jechoi@andrew.cmu.edu | ETA-IK: Execution-Time-Aware Inverse Kinematics for Dual-Arm Systems : This paper presents ETA-IK, a novel Execution-Time-Aware Inverse Kinematics
method tailored for dual-arm robotic systems. The primary goal is to optimize
motion execution time by leveraging the redundancy of both arms, specifically
in tasks where only the relative pose of the robots is constrained, such as
dual-arm scanning of unknown objects. Unlike traditional inverse kinematics
methods that use surrogate metrics such as joint configuration distance, our
method incorporates direct motion execution time and implicit collisions into
the optimization process, thereby finding target joints that allow subsequent
trajectory generation to produce more efficient, collision-free motion. A
neural network-based execution-time approximator is employed to predict time-efficient
joint configurations while accounting for potential collisions. Through
experimental evaluation on a system composed of a UR5 and a KUKA iiwa robot, we
demonstrate significant reductions in execution time. The proposed method
outperforms conventional approaches, showing improved motion efficiency without
sacrificing positioning accuracy. These results highlight the potential of
ETA-IK to improve the performance of dual-arm systems in applications where
efficiency and safety are paramount. | 1 | jechoi@andrew.cmu.edu [SEP] ETA-IK: Execution-Time-Aware Inverse Kinematics for Dual-Arm Systems : This paper presents ETA-IK, a novel Execution-Time-Aware Inverse Kinematics
method tailored for dual-arm robotic systems. The primary goal is to optimize
motion execution time by leveraging the redundancy of both arms, specifically
in tasks where only the relative pose of the robots is constrained, such as
dual-arm scanning of unknown objects. Unlike traditional inverse kinematics
methods that use surrogate metrics such as joint configuration distance, our
method incorporates direct motion execution time and implicit collisions into
the optimization process, thereby finding target joints that allow subsequent
trajectory generation to produce more efficient, collision-free motion. A
neural network-based execution-time approximator is employed to predict time-efficient
joint configurations while accounting for potential collisions. Through
experimental evaluation on a system composed of a UR5 and a KUKA iiwa robot, we
demonstrate significant reductions in execution time. The proposed method
outperforms conventional approaches, showing improved motion efficiency without
sacrificing positioning accuracy. These results highlight the potential of
ETA-IK to improve the performance of dual-arm systems in applications where
efficiency and safety are paramount. | 539 |
Transformer-based deep imitation learning for dual-arm robot manipulation | Deep imitation learning is promising for solving dexterous manipulation tasks
because it does not require an environment model and pre-programmed robot
behavior. However, its application to dual-arm manipulation tasks remains
challenging. In a dual-arm manipulation setup, the increased number of state
dimensions caused by the additional robot manipulators causes distractions and
results in poor performance of the neural networks. We address this issue using
a self-attention mechanism that computes dependencies between elements in a
sequential input and focuses on important elements. A Transformer, a variant of
self-attention architecture, is applied to deep imitation learning to solve
dual-arm manipulation tasks in the real world. The proposed method has been
tested on dual-arm manipulation tasks using a real robot. The experimental
results demonstrated that the Transformer-based deep imitation learning
architecture can attend to the important features among the sensory inputs,
therefore reducing distractions and improving manipulation performance when
compared with the baseline architecture without the self-attention mechanisms.
Data from this and related works are available at:
https://sites.google.com/view/multi-task-fine. | Liked | jechoi@andrew.cmu.edu | Transformer-based deep imitation learning for dual-arm robot manipulation : Deep imitation learning is promising for solving dexterous manipulation tasks
because it does not require an environment model and pre-programmed robot
behavior. However, its application to dual-arm manipulation tasks remains
challenging. In a dual-arm manipulation setup, the increased number of state
dimensions caused by the additional robot manipulators causes distractions and
results in poor performance of the neural networks. We address this issue using
a self-attention mechanism that computes dependencies between elements in a
sequential input and focuses on important elements. A Transformer, a variant of
self-attention architecture, is applied to deep imitation learning to solve
dual-arm manipulation tasks in the real world. The proposed method has been
tested on dual-arm manipulation tasks using a real robot. The experimental
results demonstrated that the Transformer-based deep imitation learning
architecture can attend to the important features among the sensory inputs,
therefore reducing distractions and improving manipulation performance when
compared with the baseline architecture without the self-attention mechanisms.
Data from this and related works are available at:
https://sites.google.com/view/multi-task-fine. | 1 | jechoi@andrew.cmu.edu [SEP] Transformer-based deep imitation learning for dual-arm robot manipulation : Deep imitation learning is promising for solving dexterous manipulation tasks
because it does not require an environment model and pre-programmed robot
behavior. However, its application to dual-arm manipulation tasks remains
challenging. In a dual-arm manipulation setup, the increased number of state
dimensions caused by the additional robot manipulators causes distractions and
results in poor performance of the neural networks. We address this issue using
a self-attention mechanism that computes dependencies between elements in a
sequential input and focuses on important elements. A Transformer, a variant of
self-attention architecture, is applied to deep imitation learning to solve
dual-arm manipulation tasks in the real world. The proposed method has been
tested on dual-arm manipulation tasks using a real robot. The experimental
results demonstrated that the Transformer-based deep imitation learning
architecture can attend to the important features among the sensory inputs,
therefore reducing distractions and improving manipulation performance when
compared with the baseline architecture without the self-attention mechanisms.
Data from this and related works are available at:
https://sites.google.com/view/multi-task-fine. | 464 |
Validation of a Control Algorithm for Human-like Reaching Motion using 7-DOF Arm and 19-DOF Hand-Arm Systems | This technical report gives an overview of our work on control algorithms
dealing with redundant robot systems for achieving human-like motion
characteristics. Previously, we developed a novel control law to exhibit
human-motion characteristics in redundant robot arm systems as well as
arm-trunk systems for reaching tasks [1], [2]. This newly developed method
eliminates the need to compute the pseudo-inverse of the Jacobian, while the
formulation and optimization of any artificial performance index is not
necessary. The time-varying properties of the muscle stiffness and damping as
well as the low-pass filter characteristics of human muscles have been modeled
by the proposed control law to generate human-motion characteristics for
reaching motion like quasi-straight line trajectory of the end-effector and
symmetric bell-shaped velocity profile. This report focuses on the experiments
performed using a 7-DOF redundant robot-arm system which proved the
effectiveness of this algorithm in imitating human-like motion characteristics.
In addition, we extended this algorithm to a 19-DOF Hand-Arm System for a
reach-to-grasp task. Simulations using the 19-DOF Hand-Arm System show the
effectiveness of the proposed scheme for effective human-like hand-arm
coordination in reach-to-grasp tasks for pinch and envelope grasps on objects
of different shapes such as a box, a cylinder, and a sphere. | Disliked | jechoi@andrew.cmu.edu | Validation of a Control Algorithm for Human-like Reaching Motion using 7-DOF Arm and 19-DOF Hand-Arm Systems : This technical report gives an overview of our work on control algorithms
dealing with redundant robot systems for achieving human-like motion
characteristics. Previously, we developed a novel control law to exhibit
human-motion characteristics in redundant robot arm systems as well as
arm-trunk systems for reaching tasks [1], [2]. This newly developed method
eliminates the need to compute the pseudo-inverse of the Jacobian, while the
formulation and optimization of any artificial performance index is not
necessary. The time-varying properties of the muscle stiffness and damping as
well as the low-pass filter characteristics of human muscles have been modeled
by the proposed control law to generate human-motion characteristics for
reaching motion like quasi-straight line trajectory of the end-effector and
symmetric bell-shaped velocity profile. This report focuses on the experiments
performed using a 7-DOF redundant robot-arm system which proved the
effectiveness of this algorithm in imitating human-like motion characteristics.
In addition, we extended this algorithm to a 19-DOF Hand-Arm System for a
reach-to-grasp task. Simulations using the 19-DOF Hand-Arm System show the
effectiveness of the proposed scheme for effective human-like hand-arm
coordination in reach-to-grasp tasks for pinch and envelope grasps on objects
of different shapes such as a box, a cylinder, and a sphere. | 0 | jechoi@andrew.cmu.edu [SEP] Validation of a Control Algorithm for Human-like Reaching Motion using 7-DOF Arm and 19-DOF Hand-Arm Systems : This technical report gives an overview of our work on control algorithms
dealing with redundant robot systems for achieving human-like motion
characteristics. Previously, we developed a novel control law to exhibit
human-motion characteristics in redundant robot arm systems as well as
arm-trunk systems for reaching tasks [1], [2]. This newly developed method
eliminates the need to compute the pseudo-inverse of the Jacobian, while the
formulation and optimization of any artificial performance index is not
necessary. The time-varying properties of the muscle stiffness and damping as
well as the low-pass filter characteristics of human muscles have been modeled
by the proposed control law to generate human-motion characteristics for
reaching motion like quasi-straight line trajectory of the end-effector and
symmetric bell-shaped velocity profile. This report focuses on the experiments
performed using a 7-DOF redundant robot-arm system which proved the
effectiveness of this algorithm in imitating human-like motion characteristics.
In addition, we extended this algorithm to a 19-DOF Hand-Arm System for a
reach-to-grasp task. Simulations using the 19-DOF Hand-Arm System show the
effectiveness of the proposed scheme for effective human-like hand-arm
coordination in reach-to-grasp tasks for pinch and envelope grasps on objects
of different shapes such as a box, a cylinder, and a sphere. | 481 |
Using Deep Learning and Machine Learning to Detect Epileptic Seizure with Electroencephalography (EEG) Data | The prediction of epileptic seizures has always been extremely challenging in
the medical domain. However, with the development of computer technology,
machine learning has introduced new ideas for seizure forecasting. Applying
machine learning models to the prediction of epileptic seizures can yield
better results, and many scientists have already worked on this problem, so
sufficient medical data are available for researchers to train machine
learning models. | Liked | zrz@andrew.cmu.edu | Using Deep Learning and Machine Learning to Detect Epileptic Seizure with Electroencephalography (EEG) Data : The prediction of epileptic seizures has always been extremely challenging in
the medical domain. However, with the development of computer technology,
machine learning has introduced new ideas for seizure forecasting. Applying
machine learning models to the prediction of epileptic seizures can yield
better results, and many scientists have already worked on this problem, so
sufficient medical data are available for researchers to train machine
learning models. | 1 | zrz@andrew.cmu.edu [SEP] Using Deep Learning and Machine Learning to Detect Epileptic Seizure with Electroencephalography (EEG) Data : The prediction of epileptic seizures has always been extremely challenging in
the medical domain. However, with the development of computer technology,
machine learning has introduced new ideas for seizure forecasting. Applying
machine learning models to the prediction of epileptic seizures can yield
better results, and many scientists have already worked on this problem, so
sufficient medical data are available for researchers to train machine
learning models. | 102 |
Learning Multi-Arm Manipulation Through Collaborative Teleoperation | Imitation Learning (IL) is a powerful paradigm to teach robots to perform
manipulation tasks by allowing them to learn from human demonstrations
collected via teleoperation, but has mostly been limited to single-arm
manipulation. However, many real-world tasks require multiple arms, such as
lifting a heavy object or assembling a desk. Unfortunately, applying IL to
multi-arm manipulation tasks has been challenging -- asking a human to control
more than one robotic arm can impose significant cognitive burden and is often
only possible for a maximum of two robot arms. To address these challenges, we
present Multi-Arm RoboTurk (MART), a multi-user data collection platform that
allows multiple remote users to simultaneously teleoperate a set of robotic
arms and collect demonstrations for multi-arm tasks. Using MART, we collected
demonstrations for five novel two- and three-arm tasks from several
geographically separated users. From our data we arrived at a critical insight:
most multi-arm tasks do not require global coordination throughout their full
duration, but only during specific moments. We show that learning from such
data consequently presents challenges for centralized agents that directly
attempt to model all robot actions simultaneously, and perform a comprehensive
study of different policy architectures with varying levels of centralization
on our tasks. Finally, we propose and evaluate a base-residual policy framework
that allows trained policies to better adapt to the mixed coordination setting
common in multi-arm manipulation, and show that a centralized policy augmented
with a decentralized residual model outperforms all other models on our set of
benchmark tasks. Additional results and videos at
https://roboturk.stanford.edu/multiarm . | Disliked | jechoi@andrew.cmu.edu | Learning Multi-Arm Manipulation Through Collaborative Teleoperation : Imitation Learning (IL) is a powerful paradigm to teach robots to perform
manipulation tasks by allowing them to learn from human demonstrations
collected via teleoperation, but has mostly been limited to single-arm
manipulation. However, many real-world tasks require multiple arms, such as
lifting a heavy object or assembling a desk. Unfortunately, applying IL to
multi-arm manipulation tasks has been challenging -- asking a human to control
more than one robotic arm can impose significant cognitive burden and is often
only possible for a maximum of two robot arms. To address these challenges, we
present Multi-Arm RoboTurk (MART), a multi-user data collection platform that
allows multiple remote users to simultaneously teleoperate a set of robotic
arms and collect demonstrations for multi-arm tasks. Using MART, we collected
demonstrations for five novel two- and three-arm tasks from several
geographically separated users. From our data we arrived at a critical insight:
most multi-arm tasks do not require global coordination throughout their full
duration, but only during specific moments. We show that learning from such
data consequently presents challenges for centralized agents that directly
attempt to model all robot actions simultaneously, and perform a comprehensive
study of different policy architectures with varying levels of centralization
on our tasks. Finally, we propose and evaluate a base-residual policy framework
that allows trained policies to better adapt to the mixed coordination setting
common in multi-arm manipulation, and show that a centralized policy augmented
with a decentralized residual model outperforms all other models on our set of
benchmark tasks. Additional results and videos at
https://roboturk.stanford.edu/multiarm . | 0 | jechoi@andrew.cmu.edu [SEP] Learning Multi-Arm Manipulation Through Collaborative Teleoperation : Imitation Learning (IL) is a powerful paradigm to teach robots to perform
manipulation tasks by allowing them to learn from human demonstrations
collected via teleoperation, but has mostly been limited to single-arm
manipulation. However, many real-world tasks require multiple arms, such as
lifting a heavy object or assembling a desk. Unfortunately, applying IL to
multi-arm manipulation tasks has been challenging -- asking a human to control
more than one robotic arm can impose significant cognitive burden and is often
only possible for a maximum of two robot arms. To address these challenges, we
present Multi-Arm RoboTurk (MART), a multi-user data collection platform that
allows multiple remote users to simultaneously teleoperate a set of robotic
arms and collect demonstrations for multi-arm tasks. Using MART, we collected
demonstrations for five novel two- and three-arm tasks from several
geographically separated users. From our data we arrived at a critical insight:
most multi-arm tasks do not require global coordination throughout their full
duration, but only during specific moments. We show that learning from such
data consequently presents challenges for centralized agents that directly
attempt to model all robot actions simultaneously, and perform a comprehensive
study of different policy architectures with varying levels of centralization
on our tasks. Finally, we propose and evaluate a base-residual policy framework
that allows trained policies to better adapt to the mixed coordination setting
common in multi-arm manipulation, and show that a centralized policy augmented
with a decentralized residual model outperforms all other models on our set of
benchmark tasks. Additional results and videos at
https://roboturk.stanford.edu/multiarm . | 22 |
Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions | Graph machine learning has been extensively studied in both academic and
industry. However, as the literature on graph learning booms with a vast number
of emerging methods and techniques, it becomes increasingly difficult to
manually design the optimal machine learning algorithm for different
graph-related tasks. To tackle the challenge, automated graph machine learning,
which aims at discovering the best hyper-parameter and neural architecture
configuration for different graph tasks/data without manual design, is
attracting increasing attention from the research community. In this paper,
we extensively discuss automated graph machine learning approaches, covering
hyper-parameter optimization (HPO) and neural architecture search (NAS) for
graph machine learning. We briefly overview existing libraries designed for
either graph machine learning or automated machine learning respectively, and
further introduce in depth AutoGL, the world's first dedicated open-source
library for automated graph machine learning. Also, we describe a
tailored benchmark that supports unified, reproducible, and efficient
evaluations. Last but not least, we share our insights on future research
directions for automated graph machine learning. This paper is the first
systematic and comprehensive discussion of approaches, libraries as well as
directions for automated graph machine learning. | Liked | zrz@andrew.cmu.edu | Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions : Graph machine learning has been extensively studied in both academic and
industry. However, as the literature on graph learning booms with a vast number
of emerging methods and techniques, it becomes increasingly difficult to
manually design the optimal machine learning algorithm for different
graph-related tasks. To tackle the challenge, automated graph machine learning,
which aims at discovering the best hyper-parameter and neural architecture
configuration for different graph tasks/data without manual design, is
attracting increasing attention from the research community. In this paper,
we extensively discuss automated graph machine learning approaches, covering
hyper-parameter optimization (HPO) and neural architecture search (NAS) for
graph machine learning. We briefly overview existing libraries designed for
either graph machine learning or automated machine learning respectively, and
further introduce in depth AutoGL, the world's first dedicated open-source
library for automated graph machine learning. Also, we describe a
tailored benchmark that supports unified, reproducible, and efficient
evaluations. Last but not least, we share our insights on future research
directions for automated graph machine learning. This paper is the first
systematic and comprehensive discussion of approaches, libraries as well as
directions for automated graph machine learning. | 1 | zrz@andrew.cmu.edu [SEP] Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions : Graph machine learning has been extensively studied in both academic and
industry. However, as the literature on graph learning booms with a vast number
of emerging methods and techniques, it becomes increasingly difficult to
manually design the optimal machine learning algorithm for different
graph-related tasks. To tackle the challenge, automated graph machine learning,
which aims at discovering the best hyper-parameter and neural architecture
configuration for different graph tasks/data without manual design, is
attracting increasing attention from the research community. In this paper,
we extensively discuss automated graph machine learning approaches, covering
hyper-parameter optimization (HPO) and neural architecture search (NAS) for
graph machine learning. We briefly overview existing libraries designed for
either graph machine learning or automated machine learning respectively, and
further introduce in depth AutoGL, the world's first dedicated open-source
library for automated graph machine learning. Also, we describe a
tailored benchmark that supports unified, reproducible, and efficient
evaluations. Last but not least, we share our insights on future research
directions for automated graph machine learning. This paper is the first
systematic and comprehensive discussion of approaches, libraries as well as
directions for automated graph machine learning. | 139 |
Automatic Design of Task-specific Robotic Arms | We present an interactive, computational design system for creating custom
robotic arms given high-level task descriptions and environmental constraints.
Various task requirements can be encoded as desired motion trajectories for the
robot arm's end-effector. Given such end-effector trajectories, our system
enables on-demand design of custom robot arms using a library of modular and
reconfigurable parts such as actuators and connecting links. By searching
through the combinatorial set of possible arrangements of these parts, our
method generates a functional, as-simple-as-possible robot arm that is capable
of tracking the desired trajectories. We demonstrate our system's capabilities
by creating robot arm designs in simulation, for various trajectory following
scenarios. | Liked | jechoi@andrew.cmu.edu | Automatic Design of Task-specific Robotic Arms : We present an interactive, computational design system for creating custom
robotic arms given high-level task descriptions and environmental constraints.
Various task requirements can be encoded as desired motion trajectories for the
robot arm's end-effector. Given such end-effector trajectories, our system
enables on-demand design of custom robot arms using a library of modular and
reconfigurable parts such as actuators and connecting links. By searching
through the combinatorial set of possible arrangements of these parts, our
method generates a functional, as-simple-as-possible robot arm that is capable
of tracking the desired trajectories. We demonstrate our system's capabilities
by creating robot arm designs in simulation, for various trajectory following
scenarios. | 1 | jechoi@andrew.cmu.edu [SEP] Automatic Design of Task-specific Robotic Arms : We present an interactive, computational design system for creating custom
robotic arms given high-level task descriptions and environmental constraints.
Various task requirements can be encoded as desired motion trajectories for the
robot arm's end-effector. Given such end-effector trajectories, our system
enables on-demand design of custom robot arms using a library of modular and
reconfigurable parts such as actuators and connecting links. By searching
through the combinatorial set of possible arrangements of these parts, our
method generates a functional, as-simple-as-possible robot arm that is capable
of tracking the desired trajectories. We demonstrate our system's capabilities
by creating robot arm designs in simulation, for various trajectory following
scenarios. | 7 |
Spatial Transfer Learning with Simple MLP | A first step to investigate the potential of transfer learning applied to the
field of spatial statistics. | Disliked | zrz@andrew.cmu.edu | Spatial Transfer Learning with Simple MLP : A first step to investigate the potential of transfer learning applied to the
field of spatial statistics. | 0 | zrz@andrew.cmu.edu [SEP] Spatial Transfer Learning with Simple MLP : A first step to investigate the potential of transfer learning applied to the
field of spatial statistics. | 57 |
Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers | In this paper, we propose a new data poisoning attack and apply it to deep
reinforcement learning agents. Our attack centers on what we call
in-distribution triggers, which are triggers native to the data distributions
the model will be trained on and deployed in. We outline a simple procedure for
embedding these, and other, triggers in deep reinforcement learning agents
following a multi-task learning paradigm, and demonstrate it in three common
reinforcement learning environments. We believe that this work has important
implications for the security of deep learning models. | Liked | zrz@andrew.cmu.edu | Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers : In this paper, we propose a new data poisoning attack and apply it to deep
reinforcement learning agents. Our attack centers on what we call
in-distribution triggers, which are triggers native to the data distributions
the model will be trained on and deployed in. We outline a simple procedure for
embedding these, and other, triggers in deep reinforcement learning agents
following a multi-task learning paradigm, and demonstrate it in three common
reinforcement learning environments. We believe that this work has important
implications for the security of deep learning models. | 1 | zrz@andrew.cmu.edu [SEP] Poisoning Deep Reinforcement Learning Agents with In-Distribution Triggers : In this paper, we propose a new data poisoning attack and apply it to deep
reinforcement learning agents. Our attack centers on what we call
in-distribution triggers, which are triggers native to the data distributions
the model will be trained on and deployed in. We outline a simple procedure for
embedding these, and other, triggers in deep reinforcement learning agents
following a multi-task learning paradigm, and demonstrate it in three common
reinforcement learning environments. We believe that this work has important
implications for the security of deep learning models. | 258 |
The ATLAS of Traffic Lights: A Reliable Perception Framework for Autonomous Driving | Traffic light perception is an essential component of the camera-based
perception system for autonomous vehicles, enabling accurate detection and
interpretation of traffic lights to ensure safe navigation through complex
urban environments. In this work, we propose a modularized perception framework
that integrates state-of-the-art detection models with a novel real-time
association and decision framework, enabling seamless deployment into an
autonomous driving stack. To address the limitations of existing public
datasets, we introduce the ATLAS dataset, which provides comprehensive
annotations of traffic light states and pictograms across diverse environmental
conditions and camera setups. This dataset is publicly available at
https://url.fzi.de/ATLAS. We train and evaluate several state-of-the-art
traffic light detection architectures on ATLAS, demonstrating significant
performance improvements in both accuracy and robustness. Finally, we evaluate
the framework in real-world scenarios by deploying it in an autonomous vehicle
to make decisions at traffic light-controlled intersections, highlighting its
reliability and effectiveness for real-time operation. | Disliked | zrz@andrew.cmu.edu | The ATLAS of Traffic Lights: A Reliable Perception Framework for Autonomous Driving : Traffic light perception is an essential component of the camera-based
perception system for autonomous vehicles, enabling accurate detection and
interpretation of traffic lights to ensure safe navigation through complex
urban environments. In this work, we propose a modularized perception framework
that integrates state-of-the-art detection models with a novel real-time
association and decision framework, enabling seamless deployment into an
autonomous driving stack. To address the limitations of existing public
datasets, we introduce the ATLAS dataset, which provides comprehensive
annotations of traffic light states and pictograms across diverse environmental
conditions and camera setups. This dataset is publicly available at
https://url.fzi.de/ATLAS. We train and evaluate several state-of-the-art
traffic light detection architectures on ATLAS, demonstrating significant
performance improvements in both accuracy and robustness. Finally, we evaluate
the framework in real-world scenarios by deploying it in an autonomous vehicle
to make decisions at traffic light-controlled intersections, highlighting its
reliability and effectiveness for real-time operation. | 0 | zrz@andrew.cmu.edu [SEP] The ATLAS of Traffic Lights: A Reliable Perception Framework for Autonomous Driving : Traffic light perception is an essential component of the camera-based
perception system for autonomous vehicles, enabling accurate detection and
interpretation of traffic lights to ensure safe navigation through complex
urban environments. In this work, we propose a modularized perception framework
that integrates state-of-the-art detection models with a novel real-time
association and decision framework, enabling seamless deployment into an
autonomous driving stack. To address the limitations of existing public
datasets, we introduce the ATLAS dataset, which provides comprehensive
annotations of traffic light states and pictograms across diverse environmental
conditions and camera setups. This dataset is publicly available at
https://url.fzi.de/ATLAS. We train and evaluate several state-of-the-art
traffic light detection architectures on ATLAS, demonstrating significant
performance improvements in both accuracy and robustness. Finally, we evaluate
the framework in real-world scenarios by deploying it in an autonomous vehicle
to make decisions at traffic light-controlled intersections, highlighting its
reliability and effectiveness for real-time operation. | 327 |
Empirical Investigation of Factors that Influence Human Presence and Agency in Telepresence Robot | As communities increasingly seek alternative ways to establish human
presence, there has been tremendous research and development in advancing
telepresence robots. People tend to feel closer and more comfortable with
telepresence robots as many sense a human presence in them. In general, many
people feel a sense of agency from the face of a robot, but some telepresence
robots without arm and body motions still tend to give a sense of human
presence. It is important to identify and configure how telepresence robots
affect people's sense of presence and agency by including a human face and
slight face and arm motions. Therefore, we carried out extensive research via a
web-based experiment to determine the prototype that can result in soothing
human interaction with the robot. The experiments featured videos of a
telepresence robot (n = 128; 2 x 2 between-participant study; robot face
factor: video-conference vs. robot-like face; arm motion factor: moving vs.
static) to investigate the factors significantly affecting human presence and
agency with the robot. We used two telepresence robots: an affordable robot
platform and a modified version for human interaction enhancements. The
findings suggest that participants feel agency that is closer to human-likeness
when the robot's face is replaced with a human's face and there is no motion.
The robot's motion invokes a feeling of human presence whether the face is
human or robot-like. | Liked | jechoi@andrew.cmu.edu | Empirical Investigation of Factors that Influence Human Presence and Agency in Telepresence Robot : As communities increasingly seek alternative ways to establish human
presence, there has been tremendous research and development in advancing
telepresence robots. People tend to feel closer and more comfortable with
telepresence robots as many sense a human presence in them. In general, many
people feel a sense of agency from the face of a robot, but some telepresence
robots without arm and body motions still tend to give a sense of human
presence. It is important to identify and configure how telepresence robots
affect people's sense of presence and agency by including a human face and
slight face and arm motions. Therefore, we carried out extensive research via a
web-based experiment to determine the prototype that can result in soothing
human interaction with the robot. The experiments featured videos of a
telepresence robot (n = 128; 2 x 2 between-participant study; robot face
factor: video-conference vs. robot-like face; arm motion factor: moving vs.
static) to investigate the factors significantly affecting human presence and
agency with the robot. We used two telepresence robots: an affordable robot
platform and a modified version for human interaction enhancements. The
findings suggest that participants feel agency that is closer to human-likeness
when the robot's face is replaced with a human's face and there is no motion.
The robot's motion invokes a feeling of human presence whether the face is
human or robot-like. | 1 | jechoi@andrew.cmu.edu [SEP] Empirical Investigation of Factors that Influence Human Presence and Agency in Telepresence Robot : As communities increasingly seek alternative ways to establish human
presence, there has been tremendous research and development in advancing
telepresence robots. People tend to feel closer and more comfortable with
telepresence robots as many sense a human presence in them. In general, many
people feel a sense of agency from the face of a robot, but some telepresence
robots without arm and body motions still tend to give a sense of human
presence. It is important to identify and configure how telepresence robots
affect people's sense of presence and agency by including a human face and
slight face and arm motions. Therefore, we carried out extensive research via a
web-based experiment to determine the prototype that can result in soothing
human interaction with the robot. The experiments featured videos of a
telepresence robot (n = 128; 2 x 2 between-participant study; robot face
factor: video-conference vs. robot-like face; arm motion factor: moving vs.
static) to investigate the factors significantly affecting human presence and
agency with the robot. We used two telepresence robots: an affordable robot
platform and a modified version for human interaction enhancements. The
findings suggest that participants feel agency that is closer to human-likeness
when the robot's face is replaced with a human's face and there is no motion.
The robot's motion invokes a feeling of human presence whether the face is
human or robot-like. | 459 |
Development of a Feeding Assistive Robot Using a Six Degree of Freedom Robotic Arm | This project introduces a Feeding Assistive Robot tailored to individuals
with physical disabilities, including those with limited arm function or hand
control. The core component is a precise 6-degree-of-freedom robotic arm, operated
seamlessly through voice commands. Integration of an Arduino-based Braccio Arm,
a distance sensor, and Bluetooth module enables voice-controlled movements. The
primary goal is to empower users to independently select and consume meals,
whether at a dining table or in bed. The system's adaptability, responsiveness,
and versatility in serving three different food items mark a significant
advancement in enhancing the quality of life for individuals with physical
challenges, promoting autonomy in daily activities. | Liked | jechoi@andrew.cmu.edu | Development of a Feeding Assistive Robot Using a Six Degree of Freedom Robotic Arm : This project introduces a Feeding Assistive Robot tailored to individuals
with physical disabilities, including those with limited arm function or hand
control. The core component is a precise 6-degree-of-freedom robotic arm, operated
seamlessly through voice commands. Integration of an Arduino-based Braccio Arm,
a distance sensor, and Bluetooth module enables voice-controlled movements. The
primary goal is to empower users to independently select and consume meals,
whether at a dining table or in bed. The system's adaptability, responsiveness,
and versatility in serving three different food items mark a significant
advancement in enhancing the quality of life for individuals with physical
challenges, promoting autonomy in daily activities. | 1 | jechoi@andrew.cmu.edu [SEP] Development of a Feeding Assistive Robot Using a Six Degree of Freedom Robotic Arm : This project introduces a Feeding Assistive Robot tailored to individuals
with physical disabilities, including those with limited arm function or hand
control. The core component is a precise 6-degree-of-freedom robotic arm, operated
seamlessly through voice commands. Integration of an Arduino-based Braccio Arm,
a distance sensor, and Bluetooth module enables voice-controlled movements. The
primary goal is to empower users to independently select and consume meals,
whether at a dining table or in bed. The system's adaptability, responsiveness,
and versatility in serving three different food items mark a significant
advancement in enhancing the quality of life for individuals with physical
challenges, promoting autonomy in daily activities. | 411 |