{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:15:42.040333Z"
},
"title": "Teaching Arm and Head Gestures to a Humanoid Robot through Interactive Demonstration and Spoken Instructions",
"authors": [
{
"first": "Michael",
"middle": [
"Connolly"
],
"last": "Brady",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "DFKI",
"location": {
"addrLine": "Saarland Informatics Campus 66123",
"settlement": "Saarbr\u00fccken",
"country": "Germany"
}
},
"email": "michael.brady@dfki.de"
},
{
"first": "Han",
"middle": [],
"last": "Du",
"suffix": "",
"affiliation": {},
"email": "hanh.du@dfki.de"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe work in progress for training a humanoid robot to produce iconic arm and head gestures as part of task-oriented dialogic interaction. This involves the development of a multimodal dialogue manager and corresponding system architecture for non-experts to 'program' the robot through speech and vision. Using this system, videos of gesture demonstrations are collected. Motor positions are extracted from the videos to specify motor trajectories, where collections of motor trajectories are used to produce robot gestures following a Gaussian mixtures approach. Concluding discussion considers how learned representations may be used for gesture recognition by the robot, and how the core system may mature into a robust system to address language grounding and semantic representation.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe work in progress for training a humanoid robot to produce iconic arm and head gestures as part of task-oriented dialogic interaction. This involves the development of a multimodal dialogue manager and corresponding system architecture for non-experts to 'program' the robot through speech and vision. Using this system, videos of gesture demonstrations are collected. Motor positions are extracted from the videos to specify motor trajectories, where collections of motor trajectories are used to produce robot gestures following a Gaussian mixtures approach. Concluding discussion considers how learned representations may be used for gesture recognition by the robot, and how the core system may mature into a robust system to address language grounding and semantic representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A conventional way of programming robots to make iconic gestural movements is to animate movements as sequences of static motor positions. This method is slow and tedious and an easier method is sought. Ideally, people should be able to teach a robot how to make new gestures through visual demonstration and verbal instruction, as they might teach another person how to make a new arm and head gesture. Such a multimodal interactive approach is one of today's current challenges in robotics. Perhaps one reason that multimodal interaction with robots is problematic relates to the compartmentalization of research specialties. Speech engineers are generally not experts at computer vision and motor control. Likewise, robotics engineers and computer vision engineers tend to treat speech and language as a 'black box' problem best left to speech and language technologists. The result is that language, vision, and motor control tend to be segregated during software planning and im-plementation. It is left to the robot interaction engineer to cobble these segregated modalities together into a cohesive software framework. The broad aim of our project is to pragmatically address this challenge by developing a processing architecture where communicative information across modalities can be more integrated. Teaching a robot how to produce gestures through visual demonstration and spoken dialogue is a task that is well suited for addressing the challenge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Robot Learning from Demonstration (LfD), sometimes also referred to as \"robot programming by demonstration,\" \"teaching by example,\" or \"imitation learning\" is an established approach for training robots through vision. As alluded to above however, one issue with LfD is that LfD practitioners generally fail to incorporate the power of verbal instruction, see (Ravichandar et al., 2020) . We posit that with the relatively recent advent of Deep learning and related breakthroughs in computer vision, artificial speech recognition, and related technologies, the time is ripe to integrate natural verbal instruction with LfD.",
"cite_spans": [
{
"start": 360,
"end": 386,
"text": "(Ravichandar et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "LfD and training by example has a rich history and is a popular research area in modern robotics, for example see: (Calinon and Billard, 2007; Argall et al., 2009; Koenig et al., 2010; Calinon et al., 2010; Lee, 2017; Zhu and Hu, 2018; Ravichandar et al., 2020) . LfD sidesteps more traditional and tedious methods of manually specifying motor control or where math and computer programing expertise is required. The essence of LfD is that robot movements may be acquired by having a person act out the movements to be learned (either through telepresence, kinetically, or visually), and transposing those movements into representations that a robot may use in combination with the robot's knowledge and internal processing to then produce the movement. It is important to note that LfD is not merely a 'record and replay' technique. Generalization is required so that, for example, starting and ending positions of the movements are not pre-determined. Exact trajectories as well as amplitudes of movements may vary insofar as the task demands, and resulting movements should be robust in the face of changing environmental conditions and actuator imprecisions. For our present purpose, the idea is also to avoid exact monotonous repetitions, and to develop robust representations that may also be used for perceiving learned gestures.",
"cite_spans": [
{
"start": 115,
"end": 142,
"text": "(Calinon and Billard, 2007;",
"ref_id": "BIBREF4"
},
{
"start": 143,
"end": 163,
"text": "Argall et al., 2009;",
"ref_id": "BIBREF1"
},
{
"start": 164,
"end": 184,
"text": "Koenig et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 185,
"end": 206,
"text": "Calinon et al., 2010;",
"ref_id": "BIBREF5"
},
{
"start": 207,
"end": 217,
"text": "Lee, 2017;",
"ref_id": "BIBREF18"
},
{
"start": 218,
"end": 235,
"text": "Zhu and Hu, 2018;",
"ref_id": "BIBREF41"
},
{
"start": 236,
"end": 261,
"text": "Ravichandar et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Interacting with robots through natural language is another popular area of research. E.g. see: (Cantrell et al., 2010; She et al., 2014; Gemignani et al., 2015; Misra et al., 2018; Liu and Zhang, 2019; Kruijff-Korbayova et al., 2020) . Perhaps the most popular domain for linguistic information transfer between people and robots is in giving travel or route instructions, such as in the spoken guidance of robotic wheelchairs, for a review see: (Williams and Scheutz, 2017) .",
"cite_spans": [
{
"start": 96,
"end": 119,
"text": "(Cantrell et al., 2010;",
"ref_id": "BIBREF7"
},
{
"start": 120,
"end": 137,
"text": "She et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 138,
"end": 161,
"text": "Gemignani et al., 2015;",
"ref_id": "BIBREF10"
},
{
"start": 162,
"end": 181,
"text": "Misra et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 182,
"end": 202,
"text": "Liu and Zhang, 2019;",
"ref_id": "BIBREF20"
},
{
"start": 203,
"end": 234,
"text": "Kruijff-Korbayova et al., 2020)",
"ref_id": null
},
{
"start": 447,
"end": 475,
"text": "(Williams and Scheutz, 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is important to note that speech communication also contains non-linguistic cues, both vocal (e.g. laughter, affect, tone) and non-vocal (e.g. gestures, eye gaze, face expressions, environmental context). For related review, see: (Mavridis, 2015; Devillers et al., 2020) . In addition to the linguistic signal, these and related cues should be readily available for incorporation into interaction designs.",
"cite_spans": [
{
"start": 233,
"end": 249,
"text": "(Mavridis, 2015;",
"ref_id": "BIBREF21"
},
{
"start": 250,
"end": 273,
"text": "Devillers et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The robot this work uses is \"VoxHead,\" a 3D printed humanoid robot (Brady, 2016; Devillers et al., 2020) . Figure 1 displays the robot. The robot serves as a life-sized and relatively low cost platform for interactive social robotics research. The robot has motors for mouth, eye cameras, and facial expressions. For the present work we do not concern ourselves with facial motors. Instead, focus is on general head, neck, and arm movements. In total there are sixteen degrees of freedom in the head, neck, and arms that we work with. Specifically we use: head tilt, head turn, neck tilt, neck turn, and for each arm: arm raise-lower, arm left-right, arm rotate, elbow bend, wrist rotate, and wrist bend. Hands with individual fingers or grippers are also not used here. Figure 2 depicts the general software plan. Sensory input to the robot is handled by a series of perception modules. A perception module may run on it's own mini-computer as e.g. an end-to-end DNN, or may run on a remote server, such as with an ASR engine. A countless number of perceptual processing modules may in theory be included, a few of which are portrayed here. For the present purpose of simplicity, only a speech-to-text ASR percep- (Amazon Transcribe) , and a skeletal tracking perceptual module (to be described in Section 2.2) are used. Input from these two sources is received by an Interaction Manager (IM). The IM collects sensory input based on a control signal from the Core Cognitive Architecture (CCA). Sensory input that is requested by the CCA feeds to an Information Store (IS), for cognitive processing. The IM also relays commands from the CCA to be executed by various production modules. Like with the perception modules, a countless number of production modules may be included, a few are portrayed, and for the present purpose only the two highlighted modules (speech synthesizer, and head and arms motors controller) are considered here.",
"cite_spans": [
{
"start": 67,
"end": 80,
"text": "(Brady, 2016;",
"ref_id": "BIBREF2"
},
{
"start": 81,
"end": 104,
"text": "Devillers et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 771,
"end": 779,
"text": "Figure 2",
"ref_id": null
},
{
"start": 1215,
"end": 1234,
"text": "(Amazon Transcribe)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "2"
},
{
"text": "The CCA is very much a work in progress. Skeletal tracking information is read by a Task Manager (TM), within the CCA for data processing (see Section 2.2), while linguistic representations and semantic gestures are read in by the Multimodal Dialogue Manager (MDM). Some multimodal dialogue managers have been proposed over the years, e.g.: (Wahlster, 2006; Sanders and Holzapfel, 2008; Peternel et al., 2014; Ond\u00e1\u0161 and Juh\u00e1r, 2015) . In developing the MDM, there are a variety of topics in human-robot communication to address. For a review, see: (Breazeal et al., 2004; Tellex et al., 2011; Ajoudani et al., 2018; Gluck and Laird, 2019) .",
"cite_spans": [
{
"start": 341,
"end": 357,
"text": "(Wahlster, 2006;",
"ref_id": "BIBREF39"
},
{
"start": 358,
"end": 386,
"text": "Sanders and Holzapfel, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 387,
"end": 409,
"text": "Peternel et al., 2014;",
"ref_id": "BIBREF29"
},
{
"start": 410,
"end": 432,
"text": "Ond\u00e1\u0161 and Juh\u00e1r, 2015)",
"ref_id": "BIBREF27"
},
{
"start": 548,
"end": 571,
"text": "(Breazeal et al., 2004;",
"ref_id": "BIBREF3"
},
{
"start": 572,
"end": 592,
"text": "Tellex et al., 2011;",
"ref_id": "BIBREF37"
},
{
"start": 593,
"end": 615,
"text": "Ajoudani et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 616,
"end": 638,
"text": "Gluck and Laird, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control Architecture",
"sec_num": "2.1"
},
{
"text": "We take inspiration from the above cited multimodal dialogue managers in combination with a more recently implemented open-source dialogue manager called VOnDa, (Kiefer et al., 2019) . Dialogue management using VOnDa is founded on the information state based approach (Traum and Larsson, 2003) . The information state contains the robot's state, including dialogue as well as domain specific information. Here, the information state may be extended by additional [multimodal] contextual knowledge. VOnDa's information state is represented as extended OWL ontologies and managed using a semantic repository and reasoner called HFC (Krieger and Willms, 2015). With VOnDa, changes in the robot's information state trigger a declarative rule system with statistical selection to generate a dialog act in response to the situation. A dialogue act generally results in the output of text (to be converted to speech), but may also be realized as motor control directives, and other modalities, such as affective cues for a text-to-speech synthesizer. For the MDM we are also pursuing how to incorporate a construction grammar approach with ontologies for language learning. See: (Steels, 2004; Oliva et al., 2012; Lindes and Laird, 2017) . We are also considering how our MDM may integrate with a VoxML approach (Pustejovsky and Krishnaswamy, 2016) .",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Kiefer et al., 2019)",
"ref_id": "BIBREF14"
},
{
"start": 268,
"end": 293,
"text": "(Traum and Larsson, 2003)",
"ref_id": "BIBREF38"
},
{
"start": 463,
"end": 475,
"text": "[multimodal]",
"ref_id": null
},
{
"start": 1172,
"end": 1186,
"text": "(Steels, 2004;",
"ref_id": "BIBREF36"
},
{
"start": 1187,
"end": 1206,
"text": "Oliva et al., 2012;",
"ref_id": "BIBREF26"
},
{
"start": 1207,
"end": 1230,
"text": "Lindes and Laird, 2017)",
"ref_id": "BIBREF19"
},
{
"start": 1305,
"end": 1341,
"text": "(Pustejovsky and Krishnaswamy, 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control Architecture",
"sec_num": "2.1"
},
{
"text": "Output from the MDM is combined with output from the TM to assemble a control signal by the Interaction Planner (IP), to be interpreted and executed by the IM. This signal is implemented using an extensible markup protocol. The IM runs locally on the robot and is designed to be very fast, mainly handling interrupts and conflict resolution. Meanwhile, the CCA may be hosted on a super machine or distributed across machines with unlimited processing power. Though the control signal from the CCA via the IP is dynamically generated, stand-alone or static control scripts may be used in place of the CCA. This allows the IM and its processing modules to be tested in the absence of the CCA. This also allows the IM to be developed as a stand-alone Robot Operating System (ROS) package, to be used with other cognitive architectures. The use of static control scripts in place of the CCA converts our system architecture into a menu-driven dialogue system. That is, with static control scripts the IM may be regarded as something of a multimodal VoiceXML interpreter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Architecture",
"sec_num": "2.1"
},
{
"text": "Consider the following scenario. A human trainer named John begins a learning session by saying something along the lines of \"okay robot, let's learn a new gesture.\" With this, the robot is triggered to enter 'gesture learning mode' and when the robot is ready with its front camera recording, the robot responds with some variation of \"okay, John, I'm ready.\" John then performs the body gesture that he wants the robot to learn. For example, let us consider a gesture to indicate 'stop' -the gesture a police officer might use when directing traffic and signaling a car to stop (as in Figure 1 , bottom left). While performing the gesture, John may give a verbal description, such as \"lift your hand like this, palm up and fingers stretched, and extend the arm forward.\" Once John has finished producing the gesture, he then says: \"that's it,\" and the robot acknowledges this by saying \"okay,\" or something analogous. The video recording of the gesture is then saved and processed into a labeled representation as described in Section 2.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 587,
"end": 595,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Control Architecture",
"sec_num": "2.1"
},
{
"text": "After processing and maybe after multiple examples of the desired gesture have been recorded, the robot should be ready to produce the gesture. In this case, the robot says something amounting to: \"shall I perform the gesture now?\" and John may respond with feedback indicating \"yes\" or \"no,\" prompting the robot to then execute the gesture or not. If there was a problem during processing, the robot may ask John to repeat the gesture. Once the robot has performed the gesture, the robot then asks: \"was that okay?\" and John may verbally respond \"yes, good\" while nodding his head 'yes' and-or giving a 'thumbs up' gesture. Or John may indicate 'no, let's try again' while shaking his head 'no' and giving a 'thumbs down' hand gesture (assuming yes/no head and hand gestures have been acquired by the robot). Either a verbal command or a visual command should be enough for the interaction to proceed. The robot might then say 'what does this gesture mean?' John would then explain the meaning of the gesture and the robot would store the gesture with a semantic label (e.g. 'stop').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control Architecture",
"sec_num": "2.1"
},
{
"text": "When in 'gesture recording mode,' the robot records a video of the person's complete motion. Each motion or gesture is stored in a buffer as a video example. The trainer (or multiple different trainers) can record the same motion multiple times, and the repetitions are stored as new examples under the same class. We use OpenPose (Cao et al., 2019) for its current superior performance in extracting 2D skeletal information from the recorded video examples. For representing and reconstructing 3D motions from the 2D poses, we deploy a dilated fully convolutional model (Pavllo et al., 2019) to estimate a 3D skeletal pose at each sampled frame. Each pose is represented as a set of Cartesian joint positions. Sequences of the extracted 3D positions are transformed into estimated motor positions for a single video example, and are saved as a motor trajectory. A motor trajectory takes the form of a matrix. The columns of the matrix correspond to motor channels of the robot, and rows of the matrix correspond to the passage of time. If a user is satisfied with a gesture reproduced by the robot, the video sample of the gesture may be discarded, and only the motor trajectory needs to be saved.",
"cite_spans": [
{
"start": 331,
"end": 349,
"text": "(Cao et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 571,
"end": 592,
"text": "(Pavllo et al., 2019)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "Though the robot can produce a gesture based on a single example, it is better to generalize the motion under the same gesture label, assuming there are multiple examples for the same class of gesture. This is done to reflect naturalness that real people perform the same motion with a rich repertoire of variations. In order to capture these variations, we apply a mixture of Gaussians (Min and Chai, 2012) to generalize the distribution of the motion examples P (x) for each gesture. This is done following Equation 1.",
"cite_spans": [
{
"start": 387,
"end": 407,
"text": "(Min and Chai, 2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (x) = K k=1 \u03c6 k N (\u00b5 k (x), \u03c3 k (x))",
"eq_num": "(1)"
}
],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "One issue in combining multiple motor trajectories is that each motion example may have a different length, meaning the number of frames could vary. To address this, we define a canonical timeline and time normalize all motion examples in the same class to this canonical timeline. The resulting statistical motion model provides a compact way to represent each gesture as a set of discrete examples. With statistical motion models, gestures can be represented in a continuous manifold space. In the gesture production phase, if the robot is asked to perform a gesture (e,g. 'stop') without any additional constraints, our model can sample a random motion to be close to the examples with high likelihood. For the gestures with additional constraints, for instance, if the direction of the robot arm is specified, or the robot starts from an unusual initial pose, our model can formulize it as an optimization problem to find the best match in a continuous motion space. Following Equation 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\hat{x} = \\arg\\max_{x} P(x \\mid c)",
"eq_num": "(2)"
}
],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "where c is a set of constraints, which can be target positions or orientations, and even some high level constraints. Furthermore, if an end effector position is specified, the statistical motion model can be coupled with inverse kinematics and-or a visual guidance system. Our system does not simply produce deterministic motions from examples, but is enabled to produce similar motions with new variations. In addition, our motion model can be continuously tuned by adding new examples. It should be noted that in estimating motor positions from Cartesian 3D joint data using inverse kinematics, there is 'motor bleed over.' This relates to how people's skeletons differ in size and proportion to each other and to the robot's skeleton. It is thus difficult to isolate desired robot motor movements for system calibration. An improved method for motor position estimation from skeletal data is desired and is a focus of current efforts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gesture Acquisition",
"sec_num": "2.2"
},
{
"text": "We have introduced the infrastructure of an interactive speech-vision-motor system for training a lifesized humanoid robot to produce desired arm and head gestures. The system interfaces a rudimentary cognitive architecture with an interaction manager for robot control. We use an LfD technique combined with spoken instructions and dialogue for training a robot to produce gestures. We lastly turn to consider the relationship between perception and action, the language grounding problem, and semantic representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
},
{
"text": "There is an intimate relationship between perception and action. The research industry surrounding the mirror neuron hypothesis reifies this (Hickok, 2014) In light of this, our current work also includes the development of a gesture recognition algorithm that depends on production learning. The time-normalized motor trajectories of a class from Section 2.2 define a centroid motor trajectory for the class. We call this centroid a gesture prototype. In short, a motor trajectory to be categorized is template-matched against the stored inventory of gesture prototypes using a multidimensional dynamic time warping algorithm (M\u00fcller, 2007) . The best match is taken as the gesture's category.",
"cite_spans": [
{
"start": 141,
"end": 155,
"text": "(Hickok, 2014)",
"ref_id": "BIBREF13"
},
{
"start": 627,
"end": 641,
"text": "(M\u00fcller, 2007)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
},
{
"text": "Plans are to develop our system to addresses the symbol grounding problem (Harnad, 1990; Steels, 2003; Cangelosi, 2010; Misra et al., 2016) . Establishing a socially situated and embodied system for interactive gesture learning was but a first step. Semantic meaning must be grounded in experience, where different modalities (speech, vision, motor feedback) are integrated. Interactive audio-visualmotor recordings from our system may be used for machine learning approaches, e.g. (Sant\u00edn et al., 2020) to train multi-modal speech recognizers. In order for meaning to emerge, the robot must 'understand' its own output. By pursuing a paradigm where gesture recognition is based on the robot's representations for gesture production, our hope is to depict representations to be one and the same for perception and production. In viewing speech as a problem of motor control, speech cognition becomes grounded in the robot's experience.",
"cite_spans": [
{
"start": 74,
"end": 88,
"text": "(Harnad, 1990;",
"ref_id": "BIBREF12"
},
{
"start": 89,
"end": 102,
"text": "Steels, 2003;",
"ref_id": "BIBREF35"
},
{
"start": 103,
"end": 119,
"text": "Cangelosi, 2010;",
"ref_id": "BIBREF6"
},
{
"start": 120,
"end": 139,
"text": "Misra et al., 2016)",
"ref_id": "BIBREF24"
},
{
"start": 482,
"end": 503,
"text": "(Sant\u00edn et al., 2020)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "3"
}
],
"back_matter": [
{
"text": "We would like to thank our colleagues for discussions and three anonymous reviewers for their feedback. This work is part of the research project XAINES, funded by grant No. 01IW20005 of the German Ministry for Education and Research (BMBF). Please find more information on XAINES here: https://www.dfki.de/en/ web/research/projects-and-publications/ projects-overview/projekt/xaines",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Progress and prospects of the human-robot collaboration",
"authors": [
{
"first": "Arash",
"middle": [],
"last": "Ajoudani",
"suffix": ""
},
{
"first": "Andrea",
"middle": [
"Maria"
],
"last": "Zanchettin",
"suffix": ""
},
{
"first": "Serena",
"middle": [],
"last": "Ivaldi",
"suffix": ""
},
{
"first": "Alin",
"middle": [],
"last": "Albu-Sch\u00e4ffer",
"suffix": ""
},
{
"first": "Kazuhiro",
"middle": [],
"last": "Kosuge",
"suffix": ""
},
{
"first": "Oussama",
"middle": [],
"last": "Khatib",
"suffix": ""
}
],
"year": 2018,
"venue": "Autonomous Robots",
"volume": "42",
"issue": "5",
"pages": "957--975",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arash Ajoudani, Andrea Maria Zanchettin, Serena Ivaldi, Alin Albu-Sch\u00e4ffer, Kazuhiro Kosuge, and Oussama Khatib. 2018. Progress and prospects of the human-robot collaboration. Autonomous Robots, 42(5):957-975.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A survey of robot learning from demonstration",
"authors": [
{
"first": "Brenna",
"middle": [
"D"
],
"last": "Argall",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Chernova",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Veloso",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Browning",
"suffix": ""
}
],
"year": 2009,
"venue": "Robotics and autonomous systems",
"volume": "57",
"issue": "5",
"pages": "469--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. 2009. A survey of robot learn- ing from demonstration. Robotics and autonomous systems, 57(5):469-483.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A low cost desktop robot and tele-presence device for interactive speech research",
"authors": [
{
"first": "Michael Connolly",
"middle": [],
"last": "Brady",
"suffix": ""
}
],
"year": 2016,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Connolly Brady. 2016. A low cost desktop robot and tele-presence device for interactive speech research. In INTERSPEECH, 2016, San Francisco.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Teaching and working with robots as a collaboration",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Breazeal",
"suffix": ""
},
{
"first": "Guy",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Lockerd",
"suffix": ""
}
],
"year": 2004,
"venue": "AAMAS",
"volume": "4",
"issue": "",
"pages": "1030--1037",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Breazeal, Guy Hoffman, and Andrea Lockerd. 2004. Teaching and working with robots as a collab- oration. In AAMAS, volume 4, pages 1030-1037.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning of gestures by imitation in a humanoid robot",
"authors": [
{
"first": "Sylvain",
"middle": [],
"last": "Calinon",
"suffix": ""
},
{
"first": "Aude",
"middle": [],
"last": "Billard",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvain Calinon and Aude Billard. 2007. Learning of gestures by imitation in a humanoid robot. Techni- cal report, Cambridge University Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning and reproduction of gestures by imitation",
"authors": [
{
"first": "Sylvain",
"middle": [],
"last": "Calinon",
"suffix": ""
},
{
"first": "Florent",
"middle": [],
"last": "D'halluin",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"L"
],
"last": "Sauser",
"suffix": ""
},
{
"first": "Darwin",
"middle": [
"G"
],
"last": "Caldwell",
"suffix": ""
},
{
"first": "Aude",
"middle": [
"G"
],
"last": "Billard",
"suffix": ""
}
],
"year": 2010,
"venue": "IEEE Robotics & Automation Magazine",
"volume": "17",
"issue": "2",
"pages": "44--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sylvain Calinon, Florent D'halluin, Eric L Sauser, Dar- win G Caldwell, and Aude G Billard. 2010. Learn- ing and reproduction of gestures by imitation. IEEE Robotics & Automation Magazine, 17(2):44-54.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Grounding language in action and perception: From cognitive agents to humanoid robots",
"authors": [
{
"first": "Angelo",
"middle": [],
"last": "Cangelosi",
"suffix": ""
}
],
"year": 2010,
"venue": "Physics of life reviews",
"volume": "7",
"issue": "2",
"pages": "139--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angelo Cangelosi. 2010. Grounding language in ac- tion and perception: From cognitive agents to hu- manoid robots. Physics of life reviews, 7(2):139- 151.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Robust spoken instruction understanding for hri",
"authors": [
{
"first": "Rehj",
"middle": [],
"last": "Cantrell",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Schermerhorn",
"suffix": ""
},
{
"first": "Xuan",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2010,
"venue": "5th ACM/IEEE International Conference on Human-Robot Interaction (HRI)",
"volume": "",
"issue": "",
"pages": "275--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehj Cantrell, Matthias Scheutz, Paul Schermerhorn, and Xuan Wu. 2010. Robust spoken instruction un- derstanding for hri. In 2010 5th ACM/IEEE Inter- national Conference on Human-Robot Interaction (HRI), pages 275-282. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Openpose: realtime multi-person 2d pose estimation using part affinity fields",
"authors": [
{
"first": "Zhe",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Gines",
"middle": [],
"last": "Hidalgo",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Shih-En",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Sheikh",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "43",
"issue": "",
"pages": "172--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. 2019. Openpose: realtime multi- person 2d pose estimation using part affinity fields. IEEE transactions on pattern analysis and machine intelligence, 43(1):172-186.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Spoken language interaction with virtual agents and robots (slivar): Towards effective and ethical interaction (dagstuhl seminar 2021)",
"authors": [
{
"first": "Laurence",
"middle": [],
"last": "Devillers",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"K"
],
"last": "Moore",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": ""
}
],
"year": 2020,
"venue": "Dagstuhl Reports, Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr Informatik",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurence Devillers, Tatsuya Kawahara, Roger K Moore, and Matthias Scheutz. 2020. Spoken lan- guage interaction with virtual agents and robots (slivar): Towards effective and ethical interaction (dagstuhl seminar 2021). In Dagstuhl Reports, vol- ume 10. Schloss Dagstuhl-Leibniz-Zentrum f\u00fcr In- formatik.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Teaching robots parametrized executable plans through spoken interaction",
"authors": [
{
"first": "Guglielmo",
"middle": [],
"last": "Gemignani",
"suffix": ""
},
{
"first": "Emanuele",
"middle": [],
"last": "Bastianelli",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Nardi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems",
"volume": "",
"issue": "",
"pages": "851--859",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guglielmo Gemignani, Emanuele Bastianelli, and Daniele Nardi. 2015. Teaching robots parametrized executable plans through spoken interaction. In Pro- ceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 851-859.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Interactive task learning",
"authors": [
{
"first": "Kevin",
"middle": [
"A"
],
"last": "Gluck",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Laird",
"suffix": ""
}
],
"year": 2019,
"venue": "Interactive Task Learning: Humans, Robots, and Agents Acquiring New Tasks through Natural Interactions",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin A Gluck and John E Laird. 2019. Interactive task learning. Interactive Task Learning: Humans, Robots, and Agents Acquiring New Tasks through Natural Interactions, 26:1.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The symbol grounding problem",
"authors": [
{
"first": "Stevan",
"middle": [],
"last": "Harnad",
"suffix": ""
}
],
"year": 1990,
"venue": "Physica D: Nonlinear Phenomena",
"volume": "42",
"issue": "1-3",
"pages": "335--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stevan Harnad. 1990. The symbol grounding prob- lem. Physica D: Nonlinear Phenomena, 42(1- 3):335-346.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The myth of mirror neurons: The real neuroscience of communication and cognition",
"authors": [
{
"first": "Gregory",
"middle": [],
"last": "Hickok",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory Hickok. 2014. The myth of mirror neurons: The real neuroscience of communication and cogni- tion. WW Norton & Company.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Vonda: A framework for ontology-based dialogue management",
"authors": [
{
"first": "Bernd",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Welker",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Biwer",
"suffix": ""
}
],
"year": 2019,
"venue": "International Workshop on Spoken Dialogue Systems Technology (IWSDS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernd Kiefer, Anna Welker, and Christophe Biwer. 2019. Vonda: A framework for ontology-based di- alogue management. In International Workshop on Spoken Dialogue Systems Technology (IWSDS).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Communication and knowledge sharing in human-robot interaction and learning from demonstration",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Koenig",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Takayama",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Matari\u0107",
"suffix": ""
}
],
"year": 2010,
"venue": "Neural Networks",
"volume": "23",
"issue": "8-9",
"pages": "1104--1112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Koenig, Leila Takayama, and Maja Matari\u0107. 2010. Communication and knowledge sharing in human-robot interaction and learning from demon- stration. Neural Networks, 23(8-9):1104-1112.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Extending owl ontologies by cartesian types to represent n-ary relations in natural language",
"authors": [
{
"first": "Hans-Ulrich",
"middle": [],
"last": "Krieger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Willms",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Language and Ontologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans-Ulrich Krieger and Christian Willms. 2015. Ex- tending owl ontologies by cartesian types to repre- sent n-ary relations in natural language. In Proceed- ings of the 1st Workshop on Language and Ontolo- gies.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Towards intuitive verbal and non-verbal communication for incidental robot-human encounters in clinic hallways",
"authors": [
{
"first": "Ivana",
"middle": [],
"last": "Kruijff-Korbayova",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Hackbarth",
"suffix": ""
},
{
"first": "Caspar",
"middle": [],
"last": "Jacob",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Kiefer",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Schmitt",
"suffix": ""
},
{
"first": "Tanja",
"middle": [],
"last": "Schneeberger",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Hanns-Peter",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Karsten",
"middle": [],
"last": "Bohlmann",
"suffix": ""
}
],
"year": 2020,
"venue": "Workshop on Incidental encounters with Robots in Public Spaces",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivana Kruijff-Korbayova, Johannes Hackbarth, Cas- par Jacob, Bernd Kiefer, Matthias Schmitt, Tanja Schneeberger, Tim Schwartz, Hanns-Peter Horn, and Karsten Bohlmann. 2020. Towards intuitive verbal and non-verbal communication for inciden- tal robot-human encounters in clinic hallways. In Astrid Rosenthal-von der Ptten, David Sirkin, Anna Abrams, Laura Platte (editor). Workshop on Inciden- tal encounters with Robots in Public Spaces, Cam- bridge United Kingdom Aachen University.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A survey of robot learning from demonstrations for human-robot collaboration",
"authors": [
{
"first": "Jangwon",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.08789"
]
},
"num": null,
"urls": [],
"raw_text": "Jangwon Lee. 2017. A survey of robot learning from demonstrations for human-robot collaboration. arXiv preprint arXiv:1710.08789.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cognitive modeling approaches to language comprehension using construction grammar",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Lindes",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Laird",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 AAAI Spring Symposium Series",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Lindes and John E Laird. 2017. Cognitive mod- eling approaches to language comprehension using construction grammar. In 2017 AAAI Spring Sympo- sium Series.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A review of methodologies for natural-language-facilitated humanrobot cooperation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xiaoli",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Advanced Robotic Systems",
"volume": "16",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Liu and Xiaoli Zhang. 2019. A review of method- ologies for natural-language-facilitated human- robot cooperation. International Journal of Ad- vanced Robotic Systems, 16(3):1729881419851402.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A review of verbal and non-verbal human-robot interactive communication",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Mavridis",
"suffix": ""
}
],
"year": 2015,
"venue": "Robotics and Autonomous Systems",
"volume": "63",
"issue": "",
"pages": "22--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Mavridis. 2015. A review of verbal and non-verbal human-robot interactive communication. Robotics and Autonomous Systems, 63:22-35.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Motion graphs++: a compact generative model for semantic motion analysis and synthesis",
"authors": [
{
"first": "Jianyuan",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Jinxiang",
"middle": [],
"last": "Chai",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Transactions on Graphics (TOG)",
"volume": "31",
"issue": "6",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianyuan Min and Jinxiang Chai. 2012. Motion graphs++ a compact generative model for semantic motion analysis and synthesis. ACM Transactions on Graphics (TOG), 31(6):1-12.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Mapping instructions to actions in 3d environments with visual goal prediction",
"authors": [
{
"first": "Dipendra",
"middle": [],
"last": "Misra",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Bennett",
"suffix": ""
},
{
"first": "Valts",
"middle": [],
"last": "Blukis",
"suffix": ""
},
{
"first": "Eyvind",
"middle": [],
"last": "Niklasson",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Shatkhin",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.00786"
]
},
"num": null,
"urls": [],
"raw_text": "Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. 2018. Mapping instructions to actions in 3d environ- ments with visual goal prediction. arXiv preprint arXiv:1809.00786.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Tell me dave: Contextsensitive grounding of natural language to manipulation instructions",
"authors": [
{
"first": "Dipendra",
"middle": [
"K"
],
"last": "Misra",
"suffix": ""
},
{
"first": "Jaeyong",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Ashutosh",
"middle": [],
"last": "Saxena",
"suffix": ""
}
],
"year": 2016,
"venue": "The International Journal of Robotics Research",
"volume": "35",
"issue": "1-3",
"pages": "281--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. 2016. Tell me dave: Context- sensitive grounding of natural language to manip- ulation instructions. The International Journal of Robotics Research, 35(1-3):281-300.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Dynamic time warping",
"authors": [
{
"first": "Meinard",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2007,
"venue": "Information retrieval for music and motion",
"volume": "",
"issue": "",
"pages": "69--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meinard M\u00fcller. 2007. Dynamic time warping. Infor- mation retrieval for music and motion, pages 69-84.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Ontology driven contextual best fit in embodied construction grammar",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Oliva",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Gilardi",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Dodge",
"suffix": ""
}
],
"year": 2012,
"venue": "International Workshop on Constraint Solving and Language Processing",
"volume": "",
"issue": "",
"pages": "133--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Oliva, Jerome Feldman, Luca Gilardi, and Ellen Dodge. 2012. Ontology driven contextual best fit in embodied construction grammar. In International Workshop on Constraint Solving and Language Pro- cessing, pages 133-151. Springer.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Event-based dialogue manager for multimodal systems",
"authors": [
{
"first": "Stanislav",
"middle": [],
"last": "Ond\u00e1\u0161",
"suffix": ""
},
{
"first": "Jozef",
"middle": [],
"last": "Juh\u00e1r",
"suffix": ""
}
],
"year": 2015,
"venue": "Emergent Trends in Robotics and Intelligent Systems",
"volume": "",
"issue": "",
"pages": "227--235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislav Ond\u00e1\u0161 and Jozef Juh\u00e1r. 2015. Event-based dialogue manager for multimodal systems. In Emer- gent Trends in Robotics and Intelligent Systems, pages 227-235. Springer.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "3d human pose estimation in video with temporal convolutions and semi-supervised training",
"authors": [
{
"first": "Dario",
"middle": [],
"last": "Pavllo",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Feichtenhofer",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7753--7762",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dario Pavllo, Christoph Feichtenhofer, David Grang- ier, and Michael Auli. 2019. 3d human pose es- timation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 7753-7762.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Teaching robots to cooperate with humans in dynamic manipulation tasks based on multimodal human-in-the-loop approach",
"authors": [
{
"first": "Luka",
"middle": [],
"last": "Peternel",
"suffix": ""
},
{
"first": "Tadej",
"middle": [],
"last": "Petri\u010d",
"suffix": ""
},
{
"first": "Erhan",
"middle": [],
"last": "Oztop",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Babi\u010d",
"suffix": ""
}
],
"year": 2014,
"venue": "Autonomous robots",
"volume": "36",
"issue": "1",
"pages": "123--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luka Peternel, Tadej Petri\u010d, Erhan Oztop, and Jan Babi\u010d. 2014. Teaching robots to cooperate with hu- mans in dynamic manipulation tasks based on multi- modal human-in-the-loop approach. Autonomous robots, 36(1):123-136.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Voxml: A visualization modeling language",
"authors": [
{
"first": "James",
"middle": [],
"last": "Pustejovsky",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.01508"
]
},
"num": null,
"urls": [],
"raw_text": "James Pustejovsky and Nikhil Krishnaswamy. 2016. Voxml: A visualization modeling language. arXiv preprint arXiv:1610.01508.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Recent advances in robot learning from demonstration",
"authors": [
{
"first": "Harish",
"middle": [],
"last": "Ravichandar",
"suffix": ""
},
{
"first": "Athanasios",
"middle": [
"S"
],
"last": "Polydoros",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Chernova",
"suffix": ""
},
{
"first": "Aude",
"middle": [],
"last": "Billard",
"suffix": ""
}
],
"year": 2020,
"venue": "Annual Review of Control, Robotics, and Autonomous Systems",
"volume": "3",
"issue": "",
"pages": "297--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harish Ravichandar, Athanasios S Polydoros, Sonia Chernova, and Aude Billard. 2020. Recent advances in robot learning from demonstration. Annual Re- view of Control, Robotics, and Autonomous Systems, 3:297-330.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A dialogue manager for multimodal human-robot interaction and learning of a humanoid robot",
"authors": [
{
"first": "David",
"middle": [],
"last": "Sanders",
"suffix": ""
},
{
"first": "Hartwig",
"middle": [],
"last": "Holzapfel",
"suffix": ""
}
],
"year": 2008,
"venue": "Industrial Robot: An International Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Sanders and Hartwig Holzapfel. 2008. A dia- logue manager for multimodal human-robot interac- tion and learning of a humanoid robot. Industrial Robot: An International Journal.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Fast visual grounding in interaction: bringing few-shot learning with neural networks to an interactive robot",
"authors": [
{
"first": "Jos\u00e9 Miguel Cano",
"middle": [],
"last": "Sant\u00edn",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Ghanimifard",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Probability and Meaning Conference",
"volume": "",
"issue": "",
"pages": "53--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Miguel Cano Sant\u00edn, Simon Dobnik, and Mehdi Ghanimifard. 2020. Fast visual grounding in inter- action: bringing few-shot learning with neural net- works to an interactive robot. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 53-61.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Teaching robots new actions through natural language instructions",
"authors": [
{
"first": "Lanbo",
"middle": [],
"last": "She",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Joyce",
"middle": [
"Y"
],
"last": "Chai",
"suffix": ""
},
{
"first": "Yunyi",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Shaohua",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ning",
"middle": [],
"last": "Xi",
"suffix": ""
}
],
"year": 2014,
"venue": "The 23rd IEEE International Symposium on Robot and Human Interactive Communication",
"volume": "",
"issue": "",
"pages": "868--873",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lanbo She, Yu Cheng, Joyce Y Chai, Yunyi Jia, Shao- hua Yang, and Ning Xi. 2014. Teaching robots new actions through natural language instructions. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication, pages 868- 873. IEEE.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Evolving grounded communication for robots",
"authors": [
{
"first": "Luc",
"middle": [],
"last": "Steels",
"suffix": ""
}
],
"year": 2003,
"venue": "Trends in cognitive sciences",
"volume": "7",
"issue": "7",
"pages": "308--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc Steels. 2003. Evolving grounded communication for robots. Trends in cognitive sciences, 7(7):308- 312.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Constructivist development of grounded construction grammars",
"authors": [
{
"first": "Luc",
"middle": [],
"last": "Steels",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL '04, Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luc Steels. 2004. Constructivist development of grounded construction grammars. ACL '04. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Understanding natural language commands for robotic navigation and mobile manipulation",
"authors": [
{
"first": "Stefanie",
"middle": [],
"last": "Tellex",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Dickerson",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Walter",
"suffix": ""
},
{
"first": "Ashis",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Teller",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "25",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew Walter, Ashis Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding natural lan- guage commands for robotic navigation and mobile manipulation. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, volume 25.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The information state approach to dialogue management",
"authors": [
{
"first": "David",
"middle": [
"R"
],
"last": "Traum",
"suffix": ""
},
{
"first": "Staffan",
"middle": [],
"last": "Larsson",
"suffix": ""
}
],
"year": 2003,
"venue": "Current and new directions in discourse and dialogue",
"volume": "",
"issue": "",
"pages": "325--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David R Traum and Staffan Larsson. 2003. The in- formation state approach to dialogue management. In Current and new directions in discourse and dia- logue, pages 325-353. Springer.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Dialogue systems go multimodal: The smartkom experience",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Wahlster",
"suffix": ""
}
],
"year": 2006,
"venue": "SmartKom: foundations of multimodal dialogue systems",
"volume": "",
"issue": "",
"pages": "3--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Wahlster. 2006. Dialogue systems go mul- timodal: The smartkom experience. In SmartKom: foundations of multimodal dialogue systems, pages 3-27. Springer.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The stateof-the-art in autonomous wheelchairs controlled through natural language: A survey",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Scheutz",
"suffix": ""
}
],
"year": 2017,
"venue": "Robotics and Autonomous Systems",
"volume": "96",
"issue": "",
"pages": "171--183",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Williams and Matthias Scheutz. 2017. The state- of-the-art in autonomous wheelchairs controlled through natural language: A survey. Robotics and Autonomous Systems, 96:171-183.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Robot learning from demonstration in robotic assembly: A survey",
"authors": [
{
"first": "Zuyuan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Huosheng",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Robotics",
"volume": "7",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zuyuan Zhu and Huosheng Hu. 2018. Robot learning from demonstration in robotic assembly: A survey. Robotics, 7(2):17.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Figure 1: 'VoxHead' 3D-printed humanoid robot",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "Figure 2: system architecture",
"type_str": "figure"
}
}
}
}