| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:30:26.084210Z" |
| }, |
| "title": "Interactive Reinforcement Learning for Table Balancing Robot", |
| "authors": [ |
| { |
| "first": "Haein", |
| "middle": [], |
| "last": "Jeon", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Robot Laboratory", |
| "institution": "Kyungpook National University", |
| "location": {} |
| }, |
| "email": "haeinjeon.knu@gmail.com" |
| }, |
| { |
| "first": "Yewon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Robot Laboratory Kyungpook National University", |
| "institution": "", |
| "location": {} |
| }, |
| "email": "yewonkim.knu@gmail.com" |
| }, |
| { |
| "first": "Boyeong", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "Artificial Intelligence Robot Laboratory Kyungpook National University", |
| "institution": "", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "With the development of robotics, the use of robots in daily life is increasing, which has led to the need for anyone to easily train robots to improve robot use. Interactive reinforcement learning(IARL) is a method for robot training based on human-robot interaction; prior studies on IARL provide only limited types of feedback or require appropriately designed shaping rewards, which is known to be difficult and time consuming. Therefore, in this study, we propose interactive deep reinforcement learning models based on voice feedback. In the proposed system, a robot learns the task of cooperative table balancing through deep Q-network using voice feedback provided by humans in real time, with automatic speech recognition(ASR) and sentiment analysis to understand human voice feedback. As a result, an optimal policy convergence rate of up to 96% was realized, and performance was improved in all voice feedback-based models.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "With the development of robotics, the use of robots in daily life is increasing, which has led to the need for anyone to easily train robots to improve robot use. Interactive reinforcement learning(IARL) is a method for robot training based on human-robot interaction; prior studies on IARL provide only limited types of feedback or require appropriately designed shaping rewards, which is known to be difficult and time consuming. Therefore, in this study, we propose interactive deep reinforcement learning models based on voice feedback. In the proposed system, a robot learns the task of cooperative table balancing through deep Q-network using voice feedback provided by humans in real time, with automatic speech recognition(ASR) and sentiment analysis to understand human voice feedback. As a result, an optimal policy convergence rate of up to 96% was realized, and performance was improved in all voice feedback-based models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Service robots equipped with artificial intelligence technology are increasing in daily life. Examples include museum exhibition guide robot (Thrun et al., 1999) , caf\u00e9-serving robot (Maxwell et al., 1999) , and object carrying robot (Yokoyama et al., 2003) . Robots increasingly perform tasks instead of or together with humans in various environments in daily life, and there has been an active research on robots that cooperate with humans (Calinon and Billard, 2007; Du et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 161, |
| "text": "(Thrun et al., 1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 183, |
| "end": 205, |
| "text": "(Maxwell et al., 1999)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 234, |
| "end": 257, |
| "text": "(Yokoyama et al., 2003)", |
| "ref_id": null |
| }, |
| { |
| "start": 443, |
| "end": 470, |
| "text": "(Calinon and Billard, 2007;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 471, |
| "end": 487, |
| "text": "Du et al., 2018)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Reinforcement learning (RL) --a robot learning technique-is a method in which an agent robot learns the action of obtaining maximum rewards through trial and error. In RL, rewards are generally given by agent action in a state, and if rewards are given through real-time human-agent interaction, it is called interactive reinforcement learning(IARL).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Reward shaping(RS) --an IARL method--is a technique in which a human trainer modifies reward functions by providing positive or negative feedback on the action of RL agents. In previous studies on IARL using natural language, the type of feedback is very limited using fewer than 10 feedbacks (Cruz et al., 2015; Tenorio-Gonzalez et al., 2010) .To facilitate the use of robots, the need for a training system through various feedbacks is raised so that robot training can be naturally performed using various voice feedbacks.", |
| "cite_spans": [ |
| { |
| "start": 293, |
| "end": 312, |
| "text": "(Cruz et al., 2015;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 313, |
| "end": 343, |
| "text": "Tenorio-Gonzalez et al., 2010)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Therefore, in this study, we propose an interactive deep RL model based on voice feedback to facilitate robot use. In the proposed system, a robot uses deep Q-networks(DQNs) (Mnih et al., 2013) to perform table balancing (Kim and Kang, 2020) tasks that require cooperation with humans and learns the RL policy through RS by human voice feedback. Using RS, a human trainer who collaborates table balancing task with robot and knows how to perform a task provides positive or negative feedback in real time about a robot's action via speech. Therefore, the agent provided with voice feedback learns the optimal policy--a policy that always leads to the balanced table state--faster and more naturally than when feedback is not used.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 193, |
| "text": "(Mnih et al., 2013)", |
| "ref_id": null |
| }, |
| { |
| "start": 221, |
| "end": 241, |
| "text": "(Kim and Kang, 2020)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is organized as follows. Section 2 explores the flow and limitations of prior IARL studies through related work, and Section 3 describes the proposed interactive deep RL system based on voice feedback. In Section 4, we describe the results of table balancing task training based on the proposed system, and compare the difference in learning performance against conventional DQN as a baseline and between voice feedback provision types. Finally, Section 5 concludes this study and suggests future research directions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One of the strategies to improve learning performance in RL is that humans guide agents as external trainers. Representative examples include learning by imitation (Bandera et al., 2012 ), demonstration(Argall et al., 2009 Zhu and Hu, 2018) , and by feedback. Among them, focusing on feedbackproviding learning, we examine: (1) the design of IARL platforms that provide feedback through mouse or remote controls (Thomaz et al., 2006; Ullerstam and Mizukawa, 2004) , (2) design of IARL algorithms (Knox and Stone, 2009; Griffith et al., 2013; Faulkner et al., 2020) and 3studies of IARL through voice feedback (Tenorio-Gonzalez et al., 2010; Cruz et al., 2015) . What these studies have in common is that RS reduces training time and fosters the robot or computer to learn the target action.", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 185, |
| "text": "(Bandera et al., 2012", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 186, |
| "end": 222, |
| "text": "), demonstration(Argall et al., 2009", |
| "ref_id": null |
| }, |
| { |
| "start": 223, |
| "end": 240, |
| "text": "Zhu and Hu, 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 412, |
| "end": 433, |
| "text": "(Thomaz et al., 2006;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 434, |
| "end": 463, |
| "text": "Ullerstam and Mizukawa, 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 496, |
| "end": 518, |
| "text": "(Knox and Stone, 2009;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 519, |
| "end": 541, |
| "text": "Griffith et al., 2013;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 542, |
| "end": 564, |
| "text": "Faulkner et al., 2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 609, |
| "end": 640, |
| "text": "(Tenorio-Gonzalez et al., 2010;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 641, |
| "end": 659, |
| "text": "Cruz et al., 2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Regarding methods that adopt hardware input devices, some approaches use a mouse or remote control to design an IARL platform (Thomaz et al., 2006; Ullerstam and Mizukawa, 2004) . Thomaz et al. (2006) revealed that IARL can improve robot's learning efficiency in an interactive Qlearning platform for cooking simulation robots, where humans can use mouse scrolls to provide feedback for robot actions by giving a number between -1 and +1. In the study of Ullerstam and Mizukawa (2004) , AIBO robots learned action sequences such as singing after hearing a command from a human feedback given by remote control. However, in these prior studies on the design of such an IARL platform, input hardware, such as a mouse and remote control, is required to provide human feedback, which is difficult to see as a natural interaction with human.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 147, |
| "text": "(Thomaz et al., 2006;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 148, |
| "end": 177, |
| "text": "Ullerstam and Mizukawa, 2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 180, |
| "end": 200, |
| "text": "Thomaz et al. (2006)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 455, |
| "end": 484, |
| "text": "Ullerstam and Mizukawa (2004)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Studies on developing IARL algorithms using human feedback include TAMER (Knox and Stone, 2009) , Advise (Griffith et al., 2013) and REPaIR algorithm (Faulkner et al., 2020) . In TAMER--an interactive reinforcement learning algorithm proposed by Knox and Stone (2009) --an agent learns a human feedback function by receiving two evaluation signals of positive and negative from the human on their keyboards; it was tested in Tetris game and mountain car problem. In Advise proposed by Griffith et al. (2013) , a human modifies an agent's action choice probability, i.e., the policy, by giving the agent binary feedback--positive or negative. As a result, Advise outperformed conventional RL algorithms on game tasks such as Pac-Man. Faulkner et al. (2020) proposed the RE-PaIR algorithm, which estimates the correctness of human feedback over time; virtual and physical robots performed tasks, such as putting a ball into the box in a simulation environment and grasping cup in the real world. They proved that the REPaIR algorithm matched or improved the performance of conventional Q-learning algorithms. However, these approachs that focused on feedback learning algorithms for IARL required the design of an appropriate shaping function, and additional time to calculate rewards or policies. Moreover, in the framework proposed in this study, natural language voice feedback is directly integrated into a reward so that the amount of additional computation required for DQN learning is relatively small.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 95, |
| "text": "(Knox and Stone, 2009)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 105, |
| "end": 128, |
| "text": "(Griffith et al., 2013)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 150, |
| "end": 173, |
| "text": "(Faulkner et al., 2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 246, |
| "end": 267, |
| "text": "Knox and Stone (2009)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 485, |
| "end": 507, |
| "text": "Griffith et al. (2013)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 733, |
| "end": 755, |
| "text": "Faulkner et al. (2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Studies that investigated IARL using natural language speech voice feedback itself include dynamic RS (Tenorio-Gonzalez et al., 2010) and IARL through speech guidance (Cruz et al., 2015) . Tenorio-Gonzalez et al. 2010showed that robots can use human voice feedback in RL to learn navigation tasks by assigning specific scalar rewards to feedback vocabulary, such as +100 to \"excellent\" and -10 to \"bad\" in simulation environments. Cruz et al. (2015) used voice commands and automatic speech recognition(ASR) to transcribe input voice commands, and then compared the input sentence and predefined lists using Levenshtein distance for cleaning tasks of robot arm agents. However, in these approaches using voice feedback, the RS function was designed by assigning a static reward value to a list of very limited words and sentences defined in advance. Therefore, when a feedback vocabulary that has not been defined in the list is input, the agent may have difficulty in learning. Moreover, the framework proposed in this study analyzes the positive and negative degrees of input voice feedback using a pretrained sentiment analysis module and converts it into a reward value. Therefore, no matter what feedback phrase is input, the sentiment polarity of voice feedback can be analyzed and used for DQN RL.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 133, |
| "text": "(Tenorio-Gonzalez et al., 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 167, |
| "end": 186, |
| "text": "(Cruz et al., 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 431, |
| "end": 449, |
| "text": "Cruz et al. (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Through the examination of prior studies, we can summarize that IARL ordinarily improves learning performance. However, most studies did not adopt a natural interaction method with humans by requiring hardware input devices such as a keyboard or mouse. Further, studies using voice feedback used a small number of feedbacks. In this current study, we designed an IARL system for natural robot learning using voice feedback with ASR and sentiment analysis techniques to resolve these limitations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this section, we describe the proposed deep RL framework for table balancing robots based on voice feedback. The task that the robot aims to learn is to maintain balance when lifting a table cooperatively with a human. Figure 1 shows the overall work diagram of the proposed system. First, the robot takes a table state image with a camera and forwards it to the DQN. Next,the robot drives the balancing action predicted by DQN through image analysis. Then, the robot receives evaluative feedback from humans on the executed action; the voice feedback is input via the robot's microphone, converted to numerical values by voice feedback recognition and conversion module, and then incorporated into the environmental rewards of the DQN algorithm. Through repetition of the above process, the robot learns a policy in which the sum of environmental rewards and human voice feedback are maximized, and because of the learning, the robot can perform a cooperative table balancing task. In this work, the robot that will learn the table balancing task is Softbank's NAO robot, and the table is a rectangular box with width, length, and height of 31, 23, 6cm respectively. In addition, the table states to be used for learning were imaged using the lower camera mounted on the NAO robot. The environment selects a table state image from the training dataset and feeds it to the robot, which is a DQN agent. The robot determines the action in the current time step according to the -greedy policy, which selects a random action with a probability of for exploration. If no random action is selected, the agent chooses the action that maximizes the value of the Q function. The Q function that DQN aims to predict is as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 222, |
| "end": 230, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Q \u03c0 (s, a) = E \u03c0 \u221e t=1 \u03b3 t r t (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where r is the reward that the robot receives when it moves to the next state from the current state by performing the action. The Q function is represented as the expected value of the cumulative reward received when executing the action a in state s, and \u03b3 is the discount rate, which reduces the influence of the Q value in the future state. After executing an action, the agent receives evaluative voice feedback from human and environmental rewards. Table 1 defines the environmental rewards of the proposed system. The environment provides a positive reward of +0.5 when the robot reaches the target state, the balancing maintenance state (s 0 ). A negative reward of \u22120.3 is given when the agent outputs an action that reaches a state other than the target. Finally the agent receives negative reward of \u22120.5 when returning an undefined action other than the one in the balancing task model in Kim and Kang (2020)'s work, such as returning a down while recognizing the human action state as s upup .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 455, |
| "end": 462, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Interactive voice feedback is a human speech evaluation of the robot's action. After checking the balance state of the table that has changed by the robot's action, the human provides positive voice feedback when the robot reaches the target state, and negative voice feedback otherwise. The provided voice feedback is converted into a numerical value through the voice feedback recognition and conversion module, and then added to the RL environment rewards. When the human provides voice feedback, the robot uses both feedback and envi-ronmental reward; and without feedback, the robot uses only environmental reward for learning. In Subsection 3.2, the voice feedback recognition and conversion module is described in depth.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In Algorithm 1, \u03b8 stands for the parameters of neural networks. DQN considers y t as a target and proceeds learning in a direction that reduces the error of y t and estimated Q(s t , a t ) by neural networks. Therefore, the DQN model is updated in every episode via the loss function L(\u03b8), which computes the mean squared error. With a repetitive update of \u03b8 in the direction of minimizing L(\u03b8), the Q function gets closer to the optimal state-action value function, and the agent learns the optimal action in the given state. Through this process, the robot can train DQN for table balancing with human voice feedback. To incorporate voice feedback in the DQN framework, we implemented voice feedback recognition and conversion module.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The voice feedback recognition and conversion module analyzed whether input voice feedback evaluated the robot's action positively or negatively. The voice feedback recognition and conversion module, shown in Figure 1 , consisted of two processes: ASR and sentiment analysis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 209, |
| "end": 217, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Voice Feedback Recognition and Conversion Modules", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "First, the robot received an voice feedback signal from the microphone. ASR transcribed the signal into a character string and output it. We adopted Google Cloud speech-to-text as the ASR system, a cloud-based service that supported speech input and corresponding transcription in real time. This ASR system supports online streaming and offline voice audio processing, which was suitable for the agent's learning environment in our experimental setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Voice Feedback Recognition and Conversion Modules", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Using a string of sentences obtained through ASR, sentiment analysis identified the positive and negative degrees of voice feedback phrases. The analyzed sentiment was returned in real value between \u22121 and 1 with positive and negative feedback being closer to +1 and \u22121. Moreover, if ASR could not correctly recognize speech signal, this module takes feedback as 'none' and only uses environmental reward. Google Natural Language API was used for sentiment analysis because of the ease of processing and modifying the sentiment analysis results in the implementation process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Voice Feedback Recognition and Conversion Modules", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Converted value Well done 0.8 Fine 0.6 That is not how you do it \u22120.699 Try again \u22120.5 ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feedback phrases", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we discuss the construction of a feedback dataset for the experiment, evaluation of the voice feedback recognition and conversion module, and verification of the proposed interactive deep RL model through experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "First, we constructed the voice feedback phrase dataset to test the proposed DQN model from corpora. The corpora used to build the dataset were Sentiment lexicon (Hu and Liu, 2004) , AFINN lexicon (Nielsen, 2011) , and Classroom English (Hong and Sohn, 2013) . A total of 100 feedback dataset phrases were extracted for experiments from the corpora, with 50 positive feedback phrases and 50 negative feedback. The feedback phrases were mainly short sentences or words that evaluated actions. Table 2 shows an example of some feedback phrases in the dataset and their converted sentiment analysis values which were incorporated in the RL reward function.", |
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 180, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 197, |
| "end": 212, |
| "text": "(Nielsen, 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 237, |
| "end": 258, |
| "text": "(Hong and Sohn, 2013)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 492, |
| "end": 499, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Voice Feedback Dataset and Recognition Rate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As a result of testing the recognition accuracy of Google Cloud speech-to-text, which is the ASR used in this study, the average sentence recognition rate was 86% using the built feedback phrase dataset. Three times of tests with the feedback phrase datasets on Google Natural Language APIs showed an average sentence recognition rate of 96%. An accuracy of less than 100% meant that the agent might receive an erroneous reward signal due to the malfunction of the voice feedback recognition and conversion module. In this study, all cases in which wrong rewards were given from malfunction of ASR or sentiment analysis were considered, and it was confirmed via experiments that using interactive voice feedback could foster the agent's target task learning despite such errors. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Voice Feedback Dataset and Recognition Rate", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In this paper, we employed two voice feedback models: consecutive voice feedback (Consec-VF) and periodical voice feedback (Prdc-VF) models ( Figure 2 ). During the training, the human can provide (1) Consec-VF in the early stages of learning, or (2) Prdc-VF throughout learning. Consec-VF provided 100 consecutive feedback earlier in training, and Prdc-VF provided 10 feedbacks every 2,000 episodes. Training was conducted in simulation where random state images are given in every episode and human trainer provides voice feedback via microphone while observing the next state. We also run experiments on a physical NAO robot as a proof of concept, and robot training video can be found at this link. (http://air.knu.ac.kr/index.php/evolutionarycooperative-robot-development-using-distributeddeep-reinforcement-learning) We compared the two feedback-providing models with conventional DQN without voice feedback as a baseline. Additional four optimizer comparison experiments were conducted on Consec-VF. We conducted 30 experiments for each model setting and evaluated the performance by calculating the optimal policy convergence rate after the training. Hyperparameter settings for training DQNs are shown in Table 3 . All hyperparameter settings, except the number of voice feedbacks, were equally applicable to both the proposed IARL model and baseline model-DQNs. We analyze the difference in model performance by the two methods of providing interactive voice feedback: Consec-VF and Prdc-VF. Voice feedback was provided 100 times out of 20,000 episodes (Table 3), and other episodes only used environmental rewards from Table 1 . The Consec-VF model is designed to intensively feed voice feedback at the beginning of learning to establish the initial learning direction, whereas Prdc-VF model is designed to reflect human feedback steadily in the overall learning process so that human feedback could be consistently reflected. 
Table 4 shows the results of experiment with two optimizers by applying the hyperparameter settings of Table 3 to the two voice feedback models and baseline DQNs. First, for the Consec-VF model, the optimal policy convergence rate was 86% and 96% when SGD and Adam optimizers were used, showing higher performance than the baseline with optimal policy convergence rates of 80% and 73% , respectively. Particularly, the convergence rate of 96% where 29 of 30 experiments learned optimal policies with Adam optimizer showed that combining Consec-VF with DQN significantly improved model performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 150, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1214, |
| "end": 1221, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1630, |
| "end": 1637, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1938, |
| "end": 1945, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 2041, |
| "end": 2048, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interactive Voice Feedback DQN Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Moreover, the Prdc-VF model showed lower performance than the Consec-VF and baseline models, which could be analyzed by training loss graphs. Figure 3 shows the training loss of the baseline, Consec-VF, and Prdc-VF models. In Figure 3-(a) and -(b), the loss stably converged to zero in the Consec-VF baseline model. However, in the Prdc-VF model in Figure 3-(c) , loss spikes were ob-Optimizer Baseline Consec-VF SGD 80% 86% Adam 73 % 96% Adagrad 43 % 56% Adadelta 63 % 76% Table 5 : Optimal policy convergence rate of the baseline and Consec-VF models using four different optimizers served during the training process. We analyzed that the intermittent intervention of voice feedback interfered with the convergence of losses during the training, resulting in a lower performance of the Prdc-VF model compared with others. Experiment results showed that the Consec-VF model learned optimal policies better than baseline and Prdc-VF models. As in-depth experiments, we examine the results of the experiment by adding Adagrad, Adalta optimizers to the Consec-VF model to ensure that the use of Consec-VF consistently leads to model learning performance. Table 5 shows the optimal policy convergence rate after 30 experiments on the Consec-VF and baseline model on four optimizers. In all experiments Consec-VF showed improved optimal policy learning compared to the baseline DQN. These experiment results indicated that incorporating interactive voice feedback into DQN for table balancing tasks improved model learning performance in all optimizer settings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 150, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 226, |
| "end": 238, |
| "text": "Figure 3-(a)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 349, |
| "end": 361, |
| "text": "Figure 3-(c)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 474, |
| "end": 481, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 1154, |
| "end": 1161, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interactive Voice Feedback DQN Model", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In this study, we proposed an interactive deep RL model based on voice feedback for table balancing robot. The proposed system suggests DQN incorporating human voice feedback using ASR and sentiment analysis, where feedback given by humans are incorporated into the reward function. Experiment results show that the Consec-VF model, which pro-vides Consec-VF early in learning, achieves an optimal policy convergence rate higher than the baseline model in all optimizer settings. There are several areas of extensions of our approach. Future direction for our work includes incorporating multimodal feedback to DQN using various robot sensors. We could also focus on deepening model optimization technique that improves learning performance of interactive RL model in varying settings. Robot could also learn when to use feedback and when to discard it or incorporate text semantics such as guiding robot behavior. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported by the National Research Foundation of Korea funded by the Korean Government under grant NRF-2019R1A2C1011270 and by the BK21 FOUR (Fostering Outstanding Universities for Research) funded by the Ministry of Education, Department of Artificial Intelligence, Kyungpook National University, Korea (I20SS7610128).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A survey of robot learning from demonstration", |
"authors": [
{
"first": "Brenna",
"middle": [
"D"
],
"last": "Argall",
"suffix": ""
},
{
"first": "Sonia",
"middle": [],
"last": "Chernova",
"suffix": ""
},
{
"first": "Manuela",
"middle": [],
"last": "Veloso",
"suffix": ""
},
{
"first": "Brett",
"middle": [],
"last": "Browning",
"suffix": ""
}
],
| "year": 2009, |
| "venue": "Robotics and autonomous systems", |
| "volume": "57", |
| "issue": "5", |
| "pages": "469--483", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. 2009. A survey of robot learn- ing from demonstration. Robotics and autonomous systems, 57(5):469-483.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A survey of vision-based architectures for robot learning by imitation", |
"authors": [
{
"first": "JP",
"middle": [],
"last": "Bandera",
"suffix": ""
},
{
"first": "JA",
"middle": [],
"last": "Rodriguez",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Molina-Tanco",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bandera",
"suffix": ""
}
],
| "year": 2012, |
| "venue": "International Journal of Humanoid Robotics", |
| "volume": "9", |
| "issue": "01", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "JP Bandera, JA Rodriguez, L Molina-Tanco, and A Bandera. 2012. A survey of vision-based architec- tures for robot learning by imitation. International Journal of Humanoid Robotics, 9(01):1250006.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Active teaching in robot programming by demonstration", |
| "authors": [ |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Calinon", |
| "suffix": "" |
| }, |
| { |
| "first": "Aude", |
| "middle": [], |
| "last": "Billard", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "RO-MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication", |
| "volume": "", |
| "issue": "", |
| "pages": "702--707", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sylvain Calinon and Aude Billard. 2007. Active teach- ing in robot programming by demonstration. In RO- MAN 2007-The 16th IEEE International Symposium on Robot and Human Interactive Communication, pages 702-707. IEEE.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Interactive reinforcement learning through speech guidance in a domestic scenario", |
| "authors": [ |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Cruz", |
| "suffix": "" |
| }, |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Twiefel", |
| "suffix": "" |
| }, |
| { |
| "first": "Sven", |
| "middle": [], |
| "last": "Magg", |
| "suffix": "" |
| }, |
| { |
| "first": "Cornelius", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Wermter", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "2015 international joint conference on neural networks (IJCNN)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francisco Cruz, Johannes Twiefel, Sven Magg, Cor- nelius Weber, and Stefan Wermter. 2015. Interac- tive reinforcement learning through speech guidance in a domestic scenario. In 2015 international joint conference on neural networks (IJCNN), pages 1-8. IEEE.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Online robot teaching with natural human-robot interaction", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "IEEE Transactions on Industrial Electronics", |
| "volume": "65", |
| "issue": "12", |
| "pages": "9571--9581", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/TIE.2018.2823667" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Du, M. Chen, C. Liu, B. Zhang, and P. Zhang. 2018. Online robot teaching with natural human-robot in- teraction. IEEE Transactions on Industrial Electron- ics, 65(12):9571-9581.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Interactive reinforcement learning with inaccurate feedback", |
| "authors": [ |
| { |
| "first": "Taylor A Kessler", |
| "middle": [], |
| "last": "Faulkner", |
| "suffix": "" |
| }, |
| { |
| "first": "Elaine", |
| "middle": [ |
| "Schaertl" |
| ], |
| "last": "Short", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [ |
| "L" |
| ], |
| "last": "Thomaz", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "2020 IEEE International Conference on Robotics and Automation (ICRA)", |
| "volume": "", |
| "issue": "", |
| "pages": "7498--7504", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor A Kessler Faulkner, Elaine Schaertl Short, and Andrea L Thomaz. 2020. Interactive reinforcement learning with inaccurate feedback. In 2020 IEEE In- ternational Conference on Robotics and Automation (ICRA), pages 7498-7504. IEEE.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Policy shaping: Integrating human feedback with reinforcement learning", |
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Griffith",
"suffix": ""
},
{
"first": "Kaushik",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Scholz",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"L"
],
"last": "Isbell",
"suffix": ""
},
{
"first": "Andrea",
"middle": [
"L"
],
"last": "Thomaz",
"suffix": ""
}
],
| "year": 2013, |
| "venue": "Georgia Institute of Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L Isbell, and Andrea L Thomaz. 2013. Pol- icy shaping: Integrating human feedback with rein- forcement learning. Georgia Institute of Technol- ogy.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
"title": "Cooperative robot for table balancing using q-learning",
| "authors": [ |
| { |
| "first": "Yewon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Bo-Yeong", |
| "middle": [], |
| "last": "Kang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
"venue": "The Journal of Korea Robotics Society",
| "volume": "15", |
| "issue": "4", |
| "pages": "404--412", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yewon Kim and Bo-Yeong Kang. 2020. Cooperative robot for table balancing using q-learning. The Jour- nal of Korea Robotics Society, 15(4):404-412.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Interactively shaping agents via human reinforcement: The tamer framework", |
| "authors": [ |
| { |
"first": "W",
"middle": [
"Bradley"
],
"last": "Knox",
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Stone", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the fifth international conference on Knowledge capture", |
| "volume": "", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W Bradley Knox and Peter Stone. 2009. Interactively shaping agents via human reinforcement: The tamer framework. In Proceedings of the fifth international conference on Knowledge capture, pages 9-16.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Alfred: The robot waiter who remembers you", |
"authors": [
{
"first": "Bruce",
"middle": [
"A"
],
"last": "Maxwell",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"A"
],
"last": "Meeden",
"suffix": ""
},
{
"first": "Nii",
"middle": [],
"last": "Addo",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Dickson",
"suffix": ""
},
{
"first": "Jane",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Olshfski",
"suffix": ""
},
{
"first": "Eli",
"middle": [],
"last": "Silk",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Wales",
"suffix": ""
}
],
| "year": 1999, |
| "venue": "Proceedings of AAAI workshop on robotics", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruce A Maxwell, Lisa A Meeden, Nii Addo, Laura Brown, Paul Dickson, Jane Ng, Seth Olshfski, Eli Silk, and Jordan Wales. 1999. Alfred: The robot waiter who remembers you. In Proceedings of AAAI workshop on robotics, pages 1-12. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
"title": "Playing atari with deep reinforcement learning",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Silver",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Ioannis",
"middle": [],
"last": "Antonoglou",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Wierstra",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Riedmiller",
"suffix": ""
}
],
"year": 2013,
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1312.5602" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Volodymyr Mnih, Koray Kavukcuoglu, David Sil- ver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Policy invariance under reward transformations: Theory and application to reward shaping", |
"authors": [
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Daishi",
"middle": [],
"last": "Harada",
"suffix": ""
},
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
}
],
| "year": 1999, |
| "venue": "Icml", |
| "volume": "99", |
| "issue": "", |
| "pages": "278--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Y Ng, Daishi Harada, and Stuart Russell. 1999. Policy invariance under reward transforma- tions: Theory and application to reward shaping. In Icml, volume 99, pages 278-287.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs", |
"authors": [
{
"first": "Finn",
"middle": [
"\u00c5rup"
],
"last": "Nielsen",
"suffix": ""
}
],
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1103.2903" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
"raw_text": "Finn \u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.",
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Dynamic reward shaping: training a robot by voice", |
"authors": [
{
"first": "Ana",
"middle": [
"C"
],
"last": "Tenorio-Gonzalez",
"suffix": ""
},
{
"first": "Eduardo",
"middle": [
"F"
],
"last": "Morales",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Villasenor-Pineda",
"suffix": ""
}
],
| "year": 2010, |
| "venue": "Ibero-American conference on artificial intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "483--492", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ana C Tenorio-Gonzalez, Eduardo F Morales, and Luis Villasenor-Pineda. 2010. Dynamic reward shap- ing: training a robot by voice. In Ibero-American conference on artificial intelligence, pages 483-492. Springer.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Reinforcement learning with human teachers: Evidence of feedback and guidance with implications for learning performance", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Lockerd Thomaz", |
| "suffix": "" |
| }, |
| { |
| "first": "Cynthia", |
| "middle": [], |
| "last": "Breazeal", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Aaai", |
| "volume": "6", |
| "issue": "", |
| "pages": "1000--1005", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Lockerd Thomaz, Cynthia Breazeal, et al. 2006. Reinforcement learning with human teachers: Evi- dence of feedback and guidance with implications for learning performance. In Aaai, volume 6, pages 1000-1005. Boston, MA.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Minerva: a second-generation museum tour-guide robot", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Thrun", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bennewitz", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Burgard", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "B" |
| ], |
| "last": "Cremers", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Dellaert", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Hahnel", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Rosenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schulte", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Schulz", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C)", |
| "volume": "3", |
| "issue": "", |
| "pages": "1999--2005", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ROBOT.1999.770401" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Thrun, M. Bennewitz, W. Burgard, A. B. Cre- mers, F. Dellaert, D. Fox, D. Hahnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. 1999. Minerva: a second-generation museum tour-guide robot. In Pro- ceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No.99CH36288C), volume 3, pages 1999-2005 vol.3.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Interactive deep reinforcement learning model for table balancing based on human voice feedback", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Comparison of Consec-VF and Prdc-VF model", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "Loss graph of models", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF0": { |
| "num": null, |
"content": "<table><tr><td>Agent action</td><td>Reward</td></tr><tr><td>Reaching the target state</td><td>+0.5</td></tr><tr><td>Returning undefined action</td><td>-0.5</td></tr><tr><td>Reaching non-target states</td><td>-0.3</td></tr></table>",
"text": "Algorithm 1 Interactive Deep Q-Network Based on Voice Feedback: Initialize the action-value function Q with random weights \u03b8 and the target action-value function Q\u0302 with random weights \u03b8\u2212 = \u03b8. For episode = 1 to 20000: initialize the sequence; for t = 1 to T: get the table state image s_t = x_t; with probability \u03b5 select a random action a_t, otherwise select a_t = argmax_{a \u2208 A} Q_t(s_t, a); execute action a_t and observe reward r_t and image x_{t+1}; if the human trainer provides voice feedback f_t on state s_t, then let r_t \u2190 r_t + f_t; set y_t = r_t if the episode is done at step t+1, and y_t = r_t + \u03b3 max_{a' \u2208 A} Q\u0302(s', a'; \u03b8\u2212) otherwise (1); perform a gradient descent step on L(\u03b8) = E[(y_t \u2212 Q(s_t, a_t; \u03b8_t))^2] with respect to the network parameters \u03b8; every 5 steps reset \u03b8\u2212 = \u03b8. 3.1 Deep Reinforcement Learning Process Based on Voice Feedback. The robot in the proposed system uses the DQN to recognize the table state image and output the table balancing action based on human voice feedback. A DQN combines Q-learning with a deep convolutional neural network to estimate a state-action value function (Q function) given an input image and action. Depending on the degree of raising and the balancing state of the table, the human action states are divided into five in our system: up (s_up), keep (s_0), down (s_down), up a lot (s_upup), and down a lot (s_downdown). The subscripts of s represent human actions. The robot executes the table balancing action a by adjusting the knee joint drive value. Five robot actions are defined depending on the direction and degree of table movement: a_upup, a_up, a_0, a_down, and a_downdown. Algorithm 1 represents the training process of the interactive DQN based on voice feedback; it is identical to the standard DQN training process, with an interactive voice feedback step added after the robot action operation. The input state s is a table image (x_t), an RGB image of size 128 \u00d7 170 representing the balance status of the table captured by the robot camera.",
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Examples of feedback phrase with converted numeric value.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table/>", |
| "text": "Hyperparameters of DQN training.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
"content": "<table/>",
"text": "Optimal policy convergence rate of the 3 experimental models.",
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "content": "<table><tr><td>Zuyuan Zhu and Huosheng Hu. 2018. Robot learning</td></tr><tr><td>from demonstration in robotic assembly: A survey.</td></tr><tr><td>Robotics, 7(2):17.</td></tr></table>", |
| "text": "Mans Ullerstam and Makoto Mizukawa. 2004. Teaching robots behavior patterns by using reinforcement learning: how to raise pet robots with a remote control. In SICE 2004 Annual Conference, volume 1, pages 143-146. IEEE. Kazuhiko Yokoyama, Hiroyuki Handa, Takakatsu Isozumi, Yutaro Fukase, Kenji Kaneko, Fumio Kanehiro, Yoshihiro Kawai, Fumiaki Tomita, and Hirohisa Hirukawa. 2003. Cooperative works by a human and a humanoid robot. In 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422), volume 3, pages 2985-2991. IEEE.", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |