id
stringlengths
36
36
source
stringclasses
15 values
formatted_source
stringclasses
13 values
text
stringlengths
2
7.55M
e1384076-d435-4448-8c06-77fa00ef7c69
trentmkelly/LessWrong-43k
LessWrong
The Roots of Progress's 2022 in review 2022 was another big year for me and for The Roots of Progress. This is my annual review—the one post a year (other than timely announcements) where I go meta and give an update on this project. If you want stuff like this more frequently, you can support me on Patreon or make a donation to get my monthly supporter update. This year had several highlights. We announced a major expansion of this nonprofit effort and hired a CEO to lead it. I concluded my lecture series, “The Story of Civilization,” and am now writing a book based on the same content. I was interviewed for major publications, spoke at some of the top progress conferences, and co-hosted a couple of events myself. Most importantly, I had a couple of banger tweets. But I’m going to bury all of those ledes in order to start, as is my tradition, with what I suspect is more interesting to my audience: a selection of this year’s… Reading The book that fascinated me most this year was American Genesis: A Century of Invention and Technological Enthusiasm, 1870–1970, by Thomas P. Hughes, a finalist for the 1990 Pulitzer. The book is not only about the century of technological enthusiasm, but also about how that enthusiasm (in my opinion) went wrong, and how it came to an end. My review of this book was so long that I broke it into two parts: one on American invention from the “heroic age” to the system-building era and one on the transition from technocracy to the counterculture. (I may at some point do a third part, on the aesthetic reaction to modernism.) Among the more mindblowing facts I learned from this book are that Stalin made “American efficiency” a part of Soviet doctrine, and that Ford’s autobiography “was read with a zeal usually reserved for the study of Lenin.” Overall this greatly strengthened my understanding of technocracy, one of my themes for this year (see below). A close runner-up for favorite book I read this year was The Control of Nature, by John McPhee. The book tells three stories
8a0f236b-cc02-47ef-8c7c-7de0fb74e639
trentmkelly/LessWrong-43k
LessWrong
Knowledge is not just mutual information Financial status: This is independent research. I welcome financial support to make further posts like this possible. Epistemic status: This is in-progress thinking. ---------------------------------------- This post is part of a sequence on the accumulation of knowledge. Our goal is to articulate what it means for knowledge to accumulate within a physical system. The challenge is this: given a closed physical system, if I point to a region and tell you that knowledge is accumulating in this region, how would you test my claim? What are the physical characteristics of the accumulation of knowledge? We do not take some agent as the fundamental starting point but instead take a mechanistic physical system as the starting point, and look for a definition of knowledge in terms of physical patterns. The previous post looked at measuring the resemblance between some region and its environment as a possible definition of knowledge and found that it was not able to account for the range of possible representations of knowledge. This post will explore mutual information between a region within a system and the remainder of the system as a definition of the accumulation of knowledge. Formally, the mutual information between two objects is the gap between the entropy of the two objects considered as a whole, and the sum of the entropy of the two objects considered separately. If knowing the configuration of one object tells us nothing about the configuration of the other object, then the entropy of the whole will be exactly equal to the sum of the entropy of the parts, meaning there is no gap, in which case the mutual information between the two objects is zero. To the extent that knowing the configuration of one object tells us something about the configuration of the other, the mutual information between them is greater than zero. Specifically, if we would have had to ask some number N of yes-or-no questions to identify the configuration of the environment without an
adf56550-c1a6-4e7c-92ce-a34220c977fa
trentmkelly/LessWrong-43k
LessWrong
[AN #89]: A unifying formalism for preference learning algorithms Find all Alignment Newsletter resources here. In particular, you can sign up, or look through this spreadsheet of all summaries that have ever been in the newsletter. I'm always happy to hear feedback; you can send it to me by replying to this email. Audio version here (may not be up yet). Highlights Reward-rational (implicit) choice: A unifying formalism for reward learning (Hong Jun Jeon, Smitha Milli et al) (summarized by Rohin): We've got algorithms for learning preferences from demonstrations (AN #12) (possibly ranked (AN #60)), comparisons (AN #67), proxy rewards (AN #69), and even the observed state (AN #45). The insight of this paper is that these are all instances of a simple underlying formalism. Specifically, these forms of preference learning can be described by two properties: (1) the set of choices that the human picks from and (2) how each choice corresponds to a distribution over agent trajectories. Given these properties, we assume that the human makes their choice according to a Boltzmann-rational model (where the human is more likely to choose an option if it leads to higher expected reward). We have now specified a likelihood over the choice given the reward, and we can use Bayes rule to infer a distribution over the reward given the human's choice. Consider more exotic types of feedback, such as the human's decision to turn the agent off (AN #69). Here, the human has two options: turning the agent off (corresponding to the agent staying still forever), or letting it continue (corresponding to the agent taking the trajectory that maximizes its current expected reward). If the agent has the right reward function, then the Boltzmann rational human would let it continue; as a result, if the human instead tries to turn the agent off, Bayes Rule allows the agent to infer that its belief about the reward must be wrong. Thus, even this decision of whether to turn the agent off can be captured in this framework. The paper then shows two examples of
93c7d7cd-97b4-405c-b551-f09cacbd0c8b
StampyAI/alignment-research-dataset/blogs
Blogs
Ngo and Yudkowsky on scientific reasoning and pivotal acts This is a transcript of a conversation between Richard Ngo and Eliezer Yudkowsky, facilitated by Nate Soares (and with some comments from Carl Shulman). This transcript continues the [Late 2021 MIRI Conversations](https://intelligence.org/late-2021-miri-conversations/) sequence, following [Ngo’s view on alignment difficulty](https://www.lesswrong.com/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty).   Color key: | | | | --- | --- | |  Chat by Richard and Eliezer  |  Other chat  |     14. October 4 conversation --------------------------   ### 14.1. Predictable updates, threshold functions, and the human cognitive range   [Ngo][15:05] Two questions which I’d like to ask Eliezer: 1. How strongly does he think that the “shallow pattern-memorisation” abilities of GPT-3 are evidence for Paul’s view over his view (if at all) 2. How does he suggest we proceed, given that he thinks directly explaining his model of the chimp-human difference would be the wrong move? [Yudkowsky][15:07] 1 – I’d say that it’s some evidence for the Dario viewpoint which seems close to the Paul viewpoint.  I say it’s some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it.  It’s not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance. [Ngo][15:09] Did you make any advance predictions, around the 2008-2015 period, of what capabilities we’d have before AGI? [Yudkowsky][15:10] not especially that come to mind?  on my model of the future this is not particularly something I am supposed to know unless there is a rare flash of predictability. [Ngo][15:11] > > 1 – I’d say that it’s some evidence for the Dario viewpoint which seems close to the Paul viewpoint. I say it’s some evidence for the Dario viewpoint because Dario seems to be the person who made something like an advance prediction about it. It’s not enough to make me believe that you can straightforwardly extend the GPT architecture to 3e14 parameters and train it on 1e13 samples and get human-equivalent performance. > > > For the record I remember Paul being optimistic about language when I visited OpenAI in summer 2018. But I don’t know how advanced internal work on GPT-2 was by then. [Yudkowsky][15:13] 2 – in lots of cases where I learned more specifics about X, and updated about Y, I had the experience of looking back and realizing that knowing *anything* specific about X would have predictably produced a directional update about Y.  like, knowing anything in particular about how the first AGI eats computation, would cause you to update far away from thinking that biological analogies to the computation consumed by humans were a good way to estimate how many computations an AGI needs to eat.  you know lots of details about how humans consume watts of energy, and you know lots of details about how modern AI consumes watts, so it’s very visible that these quantities are so incredibly different and go through so many different steps that they’re basically unanchored from each other. I have specific ideas about how you get AGI that isn’t just scaling up Stack More Layers, which lead me to think that the way to estimate the computational cost of it is not “3e14 parameters trained at 1e16 ops per step for 1e13 steps, because that much computation and parameters seems analogous to human biology and 1e13 steps is given by past scaling laws”, a la recent OpenPhil publication.  But it seems to me that it should be possible to have the abstract insight that knowing more about general intelligence in AGIs or in humans would make the biological analogy look less plausible, because you wouldn’t be matching up an unknown key to an unknown lock. Unfortunately I worry that this depends on some life experience with actual discoveries to get something this abstract-sounding on a gut level, because people basically never seem to make abstract updates of this kind when I try to point to them as predictable directional updates? But, in principle, I’d hope there would be aspects of this where I could figure out how to show that *any* knowledge of specifics would probably update you in a predictable direction, even if it doesn’t seem best for Earth for me to win that argument by giving specifics conditional on those specifics actually being correct, and it doesn’t seem especially sound to win that argument by giving specifics that are wrong. [Ngo][15:17] I’m confused by this argument. Before I thought much about the specifics of the chimpanzee-human transition, I found the argument “humans foomed (by biological standards) so AIs will too” fairly compelling. But after thinking more about the specifics, it seems to me that the human foom was in part caused by a factor (sharp cultural shift) that won’t be present when we train AIs. [Yudkowsky][15:17] sure, and other factors will be present in AIs but not in humans [Ngo][15:17] This seems like a case where more specific knowledge updated me away from your position, contrary to what you’re claiming. [Yudkowsky][15:18] eg, human brains don’t scale and mesh, while it’s far more plausible that with AI you could just run more and more of it that’s a huge factor leading one to expect AI to scale faster than human brains did it’s like communication between humans, but squared! this is admittedly a specific argument and I’m not sure how it would abstract out to any specific argument [Ngo][15:20] Again, this is an argument that I believed less after looking into the details, because right now it’s pretty difficult to throw more compute at neural networks at runtime. Which is not to say that it’s a bad argument, the differences in compute-scalability between humans and AIs are clearly important. But I’m confused about the structure of your argument that knowing more details will predictably update me in a certain direction. [Yudkowsky][15:21] I suppose the genericized version of my actual response to that would be, “architectures that have a harder time eating more compute are architectures which, for this very reason, are liable to need better versions invented of them, and this in particular seems like something that plausibly happens before scaling to general intelligence is practically possible” [Soares][15:23] (Eliezer, I see Richard as requesting that you either back down from, or clarify, your claim that any specific observations about how much compute AI systems require will update him in a predictable direction.) | | | --- | | [Ngo: 👍] | [Yudkowsky][15:24] I’m not saying I know how to make that abstractized argument for exactly what Richard cares about, in part because I don’t understand Richard’s exact model, just that it’s one way to proceed past the point where the obvious dilemma crops up of, “If a theory about AGI capabilities is true, it is a disservice to Earth to speak it, and if a theory about AGI capabilities is false, an argument based on it is not sound.” [Ngo][15:25] Ah, I see. [Yudkowsky][15:26] possible viewpoint to try: that systems in general often have threshold functions as well as smooth functions inside them. only in ignorance, then, do we imagine that the whole thing is one smooth function. the history of humanity has a threshold function of, like, communication or culture or whatever. the correct response to this is not, “ah, so this was the unique, never-to-be-seen-again sort of fact which cropped up in the weirdly complicated story of humanity in particular, which will not appear in the much simpler story of AI” this only sounds plausible because you don’t know the story of AI so you think it will be a simple story the correct generalization is “guess some weird thresholds will also pop up in whatever complicated story of AI will appear in the history books” [Ngo][15:28] Here’s a quite general argument about why we shouldn’t expect too many threshold functions in the impact of AI: because at any point, humans will be filling in the gaps of whatever AIs can’t do. (The lack of this type of smoothing is, I claim, why culture was a sharp threshold for humans – if there had been another intelligent species we could have learned culture from, then we would have developed more gradually.) [Yudkowsky][15:30] something like this indeed appears in my model of why I expect not much impact on GDP before AGI is powerful enough to bypass human economies entirely during the runup phase, pre-AGI won’t be powerful to do “whole new things” that depend on doing lots of widely different things that humans can’t do just marginally new things that depend on doing one thing humans can’t do, or can do but a bunch worse [Ngo][15:31] Okay, that’s good to know. Would this also be true in [a civilisation of village idiots](https://www.lesswrong.com/posts/gf9hhmSvpZfyfS34B/ngo-s-view-on-alignment-difficulty)? [Yudkowsky][15:32] there will be sufficient economic reward for building out industries that are mostly human plus one thing that pre-AGI does, and people will pocket those economic rewards, go home, and not be more ambitious than that.  I have trouble empathically grasping *why* almost all the CEOs are like this in our current Earth, because I am very much not like that myself, but observationally, the current Earth sure does seem to behave like rich people would almost uniformly rather not rock the boat too much. I did not understand the whole thing about village idiots actually do you want to copy and paste the document, or try rephrasing the argument? [Ngo][15:35] Rephrasing: Claim 1: AIs will be better at doing scientific research (and other similar tasks) than village idiots, before we reach AGI. Claim 2: Village idiots still have the core of general intelligence (which you claim chimpanzees don’t have). Claim 3: It would be surprising if narrow AI’s research capabilities fell specifically into the narrow gap between village idiots and Einsteins, given that they’re both general intelligences and are very similar in terms of architecture, algorithms, etc. (If you deny claim 2, then we can substitute, say, someone at the 10th percentile of human intelligence – I don’t know what specific connotations “village idiot” has to you.) [Yudkowsky][15:37] My models do not have an easy time of visualizing “as generally intelligent as a chimp, but specialized to science research, gives you superhuman scientific capability and the ability to make progress in novel areas of science”. (this is a reference back to the pre-rephrase in the document) it seems like, I dunno, “gradient descent can make you generically good at anything without that taking too much general intelligence” must be a core hypothesis there? [Ngo][15:39] I mean, we both agree that gradient descent can produce *some* capabilities without also producing much general intelligence. But claim 1 plus your earlier claims that narrow AIs won’t surpass humans at scientific research, lead to the implication that the limitations of gradient-descent-without-much-general-intelligence fall in a weirdly narrow range. [Yudkowsky][15:42] I do credit the Village Idiot to Einstein Interval with being a little broader as a target than I used to think, since the Alpha series of Go-players took a couple of years to go from pro to world-beating even once they had a scalable algorithm.  Still seems to me that, over time, the wall clock time to traverse those ranges has been getting shorter, like Go taking less time than Chess.  My intuitions still say that it’d be quite weird to end up hanging out for a long time with AGIs that conduct humanlike conversations and are ambitious enough to run their own corporations while those AGIs are still not much good at science. But on my present model, I suspect the limitations of “gradient-descent-without-much-general-intelligence” to fall underneath the village idiot side? [Ngo][15:43] Oh, interesting. That seems like a strong prediction [Yudkowsky][15:43] Your model, as I understand it, is saying, “But surely, GD-without-GI must suffice to produce better scientists than village idiots, by specializing chimps on science” and my current reply, though it’s not a particular question I’ve thought a lot about before, is, “That… does not quite seem to me like a thing that should happen along the mainline?” though, as always, in the limit of superintelligences doing things, or our having the Textbook From The Future, we could build almost any kind of mind on purpose if we knew how, etc. [Ngo][15:44] For example, I expect that if I prompt GPT-3 in the right way, it’ll say some interesting and not-totally-nonsensical claims about advanced science. Whereas it would be very hard to prompt a village idiot to do the same. [Yudkowsky][15:44] eg, a superintelligence could load up chimps with lots of domain-specific knowledge they were not generally intelligent enough to learn themselves. ehhhhhh, it is *not* clear to me that GPT-3 is better than a village idiot at advanced science, even in this narrow sense, especially if the village idiot is allowed some training [Ngo][15:46] It’s not clear to me either. But it does seem plausible, and then it seems even more plausible that this will be true of GPT-4 [Yudkowsky][15:46] I wonder if we’re visualizing different village idiots my choice of “village idiot” originally was probably not the best target for visualization, because in a lot of cases, a village idiot – especially the stereotype of a village idiot – is, like, a damaged general intelligence with particular gears missing? [Ngo][15:47] I’d be happy with “10th percentile intelligence” [Yudkowsky][15:47] whereas it seems like what you want is something more like “Homo erectus but it has language” oh, wow, 10th percentile intelligence? that’s super high GPT-3 is far far out of its league [Ngo][15:49] I think GPT-3 is far below this person’s league in a lot of ways (including most common-sense reasoning) but I become much less confident when we’re talking about abstract scientific reasoning. [Yudkowsky][15:51] I think that if scientific reasoning were as easy as you seem to be imagining(?), the publication factories of the modern world would be *much* more productive of real progress. [Ngo][15:51] Well, a 10th percentile human is very unlikely to contribute to real scientific progress either way [Yudkowsky][15:53] Like, on my current model of how the world really works, China pours vast investments into universities and sober-looking people with PhDs and classes and tests and postdocs and journals and papers; but none of this is the real way of Science which is actually, secretly, unbeknownst to China, passed down in rare lineages and apprenticeships from real scientist mentor to real scientist student, and China doesn’t have much in the way of lineages so the extra money they throw at stuff doesn’t turn into real science. [Ngo][15:52] Can you think of any clear-cut things that they could do and GPT-3 can’t? [Yudkowsky][15:53] Like… make sense… at all?  Invent a handaxe when nobody had ever seen a handaxe before? [Ngo][15:54] You’re claiming that 10th percentile humans invent handaxes? [Yudkowsky][15:55] The activity of rearranging scientific sentences into new plausible-sounding paragraphs is well within the reach of publication factories, in fact, they often use considerably more semantic sophistication than that, and yet, this does not cumulate into real scientific progress even in quite large amounts. I think GPT-3 is basically just Not Science Yet to a much greater extent than even these empty publication factories. If 10th percentile humans don’t invent handaxes, GPT-3 sure as hell doesn’t. [Ngo][15:55] I don’t think we’re disagreeing. Publication factories are staffed with people who do better academically than 90+% of all humans. If 90th-percentile humans are very bad at science, then of course GPT-3 and 10th-percentile humans are very very bad at science. But it still seems instructive to compare them (e.g. on tasks like “talk cogently about a complex abstract topic”) [Yudkowsky][15:58] I mean, while it is usually weird for something to be barely within a species’s capabilities while being within those capabilities at all, such that only relatively smarter individual organisms can do it, in the case of something that a social species has only very recently started to do collectively, it’s plausible that the thing appeared at the point where it was barely accessible to the smartest members.  Eg, it wouldn’t be surprising if it would have taken a long time or forever for humanity to invent science from scratch, if all the Francis Bacons and Newtons and even average-intelligence people were eliminated leaving only the bottom 10%.  Because our species just started doing that, at the point where our species was barely able to start doing that, meaning, at the point where some rare smart people could spearhead it, historically speaking.  It’s not obvious whether or not less smart people can do it over a longer time. I’m not sure we disagree much about the human part of this model. My guess is that our disagreement is more about GPT-3. “Talk ‘cogently’ about a complex abstract topic” doesn’t seem like much of anything significant to me, if GPT-3 is ‘cogent’.  It fails to pass the threshold for inventing science and, I expect, for most particular sciences. [Ngo][16:00] How much training do you think a 10th-percentile human would need in a given subject matter (say, economics) before they could answer questions as well as GPT-3 can? (Right now I think GPT-3 does better by default because it at least recognises the terminology, whereas most humans don’t at all.) [Yudkowsky][16:01] I also expect that if you offer a 10th-percentile human lots of money, they can learn to talk more cogently than GPT-3 about narrower science areas.  GPT-3 is legitimately more well-read at its lower level of intelligence, but train the 10-percentiler in a narrow area and they will become able to write better nonsense about that narrow area. [Ngo][16:01] This sounds like an experiment we can actually run. [Yudkowsky][16:02] Like, what we’ve got going on here is a real *breadth* advantage that GPT-3 has in some areas, but the breadth doesn’t add up because it lacks the depth of a 10%er. [Ngo][16:02] If we asked them to read a single introductory textbook and then quiz both them and GPT-3 about items covered in that textbook, do you expect that the human would come out ahead? [Yudkowsky][16:02] AI has figured out how to do a subhumanly shallow kind of thinking, and it *is* to be expected that when AI can do anything at all, it can soon do more of that thing than the whole human species could do. No, that’s nothing remotely like giving the human the brief training the human needs to catch up to GPT-3’s longer training. A 10%er does not learn in an instant – they learn faster than GPT-3, but not in an instant. This is more like a scenario of paying somebody to, like, sit around for a year with an editor, learning how to mix-and-match economics sentences until they can learn to sound more like they’re making an argument than GPT-3 does, despite still not understanding any economics. A lot of the learning would just go into producing sensible-sounding nonsense at all, since lots of 10%ers have not been to college and have not learned how to regurgitate rearranged nonsense for college teachers. [Ngo][16:05] What percentage of humans do you think could learn to beat GPT-3’s question-answering by reading a single textbook over, say, a period of a month? [Yudkowsky][16:06] ¯\\_(ツ)\_/¯ [Ngo][16:06] More like 0.5 or 5 or 50? [Yudkowsky][16:06] Humans cannot in general pass the Turing Test for posing as AIs! What percentage of humans can pass as a calculator by reading an arithmetic textbook? Zero! [Ngo][16:07] I’m not asking them to mimic GPT-3, I’m asking them to produce better answers. [Yudkowsky][16:07] Then it depends on what kind of answers! I think a lot of 10%ers could learn to do wedding-cake multiplication, if sufficiently well-paid as adults rather than being tortured in school, out to 6 digits, thus handily beating the current GPT-3 at ‘multiplication’. [Ngo][16:08] For example: give them an economics textbook to study for a month, then ask them what inflation is, whether it goes up or down if the government prints more money, whether the price of something increases or decreases when the supply increases. [Yudkowsky][16:09] GPT-3 did not learn to produce its responses by reading *textbooks*. You’re not matching the human’s data to GPT-3’s data. [Ngo][16:10] I know, this is just the closest I can get in an experiment that seems remotely plausible to actually run. [Yudkowsky][16:10] You would want to collect, like, 1,000 Reddit arguments about inflation, and have the human read that, and have the human produce their own Reddit arguments, and have somebody tell them whether they sounded like real Reddit arguments or not. The textbook is just not the same thing at all. I’m not sure we’re at the core of the argument, though. To me it seems like GPT-3 is allowed to be superhuman at producing remixed and regurgitated sentences about economics, because this is about as relevant to Science talent as a calculator being able to do perfect arithmetic, only less so. [Ngo][16:15] Suppose that the remixed and regurgitated sentences slowly get more and more coherent, until GPT-N can debate with a professor of economics and sustain a reasonable position. [Yudkowsky][16:15] Are these points that GPT-N read elsewhere on the Internet, or are they new good points that no professor of economics on Earth has ever made before? [Ngo][16:15] I guess you don’t expect this to happen, but I’m trying to think about what experiments we could run to get evidence for or against it. The latter seems both very hard to verify, and also like a very high bar – I’m not sure if most professors of economics have generated new good arguments that no other professor has ever made before. So I guess the former. [Yudkowsky][16:18] Then I think that you can do this without being able to do science.  It’s a lot like if somebody with a really good memory was lucky enough to have read that exact argument on the Internet yesterday, and to have a little talent for paraphrasing.  Not by coincidence, having this ability gives you – on my model – no ability to do science, invent science, be the first to build handaxes, or design nanotechnology. I admit, this does reflect my personal model of how Science works, presumably not shared by many leading bureaucrats, where in fact the papers full of regurgitated scientific-sounding sentences are not accomplishing much. [Ngo][16:20] So it seems like your model doesn’t rule out narrow AIs producing well-reviewed scientific papers, since you don’t trust the review system very much. [Yudkowsky][16:23] I’m trying to remember whether or not I’ve heard of that happening, like, 10 years ago. My vague recollection is that things in the Sokal Hoax genre where the submissions succeeded, used humans to hand-generate the nonsense rather than any submissions in the genre having been purely machine-generated. [Ngo][16:24] Which doesn’t seem like an unreasonable position, but it does make it harder to produce tests that we have opposing predictions on. [Yudkowsky][16:24] Obviously, that doesn’t mean it couldn’t have been done 10 years ago, because 10 years ago it’s plausibly a lot easier to hand-generate passing nonsense than to write an AI program that does it. oh, wait, I’m wrong! <https://news.mit.edu/2015/how-three-mit-students-fooled-scientific-journals-0414> > > In April of 2005 the team’s submission, “Rooter: A Methodology for the Typical Unification of Access Points and Redundancy,” was accepted as a non-reviewed paper to the World Multiconference on Systemics, Cybernetics and Informatics (WMSCI), a conference that Krohn says is known for “being spammy and having loose standards.” > > >   > > in 2013 IEEE and Springer Publishing removed more than 120 papers from their sites after a French researcher’s analysis determined that they were generated via SCIgen > > > [Ngo][16:26] Oh, interesting Meta note: I’m not sure where to take the direction of the conversation at this point. Shall we take a brief break? [Yudkowsky][16:27] > > The creators continue to get regular emails from computer science students proudly linking to papers they’ve snuck into conferences, as well as notes from researchers urging them to make versions for other disciplines. > > > Sure! Resume 5p? [Ngo][16:27] Yepp   ### 14.2. Domain-specific heuristics and nanotechnology   [Soares][16:41] A few takes: 1. It looks to me like there’s some crux in “how useful will the ‘shallow’ stuff get before dangerous things happen”. I would be unsurprised if this spiraled back into the gradualness debate. I’m excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones). 2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I’d be up for playin the role of beeping when things seem insufficiently concrete. 3. It seems to me like Richard learned a couple things about Eliezer’s model in that last bout of conversation. I’d be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off. [Yudkowsky][17:00] ![👋](https://s.w.org/images/core/emoji/14.0.0/72x72/1f44b.png) [Ngo][17:02] Hmm, I’m not sure that I learned too much about Eliezer’s model in this last round. [Soares][17:03] (dang :-p) [Ngo][17:03] It seems like Eliezer thinks that the returns of scientific investigation are very heavy-tailed. Which does seem pretty plausible to me. But I’m not sure how useful this claim is for thinking about the development of AI that can do science. I attempted in my document to describe some interventions that would help things go right. And the levels of difficulty involved. [Yudkowsky][17:07] (My model is something like: there are some very shallow steps involved in doing science, lots of medium steps, occasional very deep steps, assembling the whole thing into Science requires having all the lego blocks available.  As soon as you look at anything with details, it ends up ‘heavy-tailed’ because it has multiple pieces and says how things don’t work if all the pieces aren’t there.) [Ngo][17:08] Eliezer, do you have an estimate of how much slower science would proceed if everyone’s IQs were shifted down by, say, 30 points? [Yudkowsky][17:10] It’s not obvious to me that science proceeds significantly past its present point.  I would not have the right to be surprised if Reality told me the correct answer was that a civilization like that just doesn’t reach AGI, ever. [Ngo][17:12] Doesn’t your model take a fairly big hit from predicting that humans just happen to be within 30 IQ points of not being able to get any more science? It seems like a surprising coincidence. Or is this dependent on the idea that doing science is much harder now than it used to be? And so if we’d been dumber, we might have gotten stuck before newtonian mechanics, or else before relativity? [Yudkowsky][17:13] No, humanity is exactly the species that finds it barely possible to do science. [Ngo][17:14] It seems to me like humanity is exactly the species that finds it barely possible to do *civilisation*. [Yudkowsky][17:14] If it were possible to do it with less intelligence, we’d be having this conversation over the Internet that we’d developed with less intelligence. [Ngo][17:15] And it seems like many of the key inventions that enabled civilisation weren’t anywhere near as intelligence-bottlenecked as modern science. [Yudkowsky][17:15] Yes, it does seem that there’s quite a narrow band between “barely smart enough to develop agriculture” and “barely smart enough to develop computers”! Though there were genuinely fewer people in the preagricultural world, with worse nutrition and no Ashkenazic Jews, and there’s the whole question about to what degree the reproduction of the shopkeeper class over several centuries was important to the Industrial Revolution getting started. [Ngo][17:15] (e.g. you’d get better spears or better plows or whatever just by tinkering, whereas you’d never get relativity just by tinkering) [Yudkowsky][17:17] I model you as taking a lesson from this which is something like… you can train up a villager to be John von Neumann by spending some evolutionary money on giving them science-specific brain features, since John von Neumann couldn’t have been much more deeply or generally intelligent, and you could spend even more money and make a chimp a better scientist than John von Neumann. My model is more like, yup, the capabilities you need to invent aqueducts sure do generalize the crap out of things, though also at the upper end of cognition there are compounding returns which can bring John von Neumann into existence, and also also there’s various papers suggesting that selection was happening really fast over the last few millennia and real shifts in cognition shouldn’t be ruled out.  (This last part is an update to what I was thinking when I wrote [Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf), and is from my own perspective a more gradualist line of thinking, because it means there’s a wider actual target to traverse before you get to von Neumann.) [Ngo][17:20] It’s not that “von Neumann isn’t much more deeply generally intelligent”, it’s more like “domain-specific heuristics and instincts get you a long way”. E.g. soccer is a domain where spending evolutionary money on specific features will very much help you beat von Neumann, and so is art, and so is music. [Yudkowsky][17:20] My skepticism here is that there’s a version of, like, “invent nanotechnology” which routes through just the shallow places, which humanity stumbles over before we stumble over deep AGI. [Ngo][17:21] Would you be comfortable publicly discussing the actual cognitive steps which you think would be necessary for inventing nanotechnology? [Yudkowsky][17:23] It should not be overlooked that there’s a very valid sibling of the old complaint “Anything you can do ceases to be AI”, which is that “Things you can do with surprisingly-to-your-model shallow cognition are precisely the things that Reality surprises you by telling you that AI can do earlier than you expected.”  When we see GPT-3, we were getting some amount of real evidence about AI capabilities advancing faster than I expected, and some amount of evidence about GPT-3’s task being performable using shallower cognition than expected. Many people were particularly surprised by Go because they thought that Go was going to require deeper real thought than chess. And I think AlphaGo probably was thinking in a legitimately deeper way than Deep Blue.  Just not as much deeper as Douglas Hofstadter thought it would take. Conversely, people thought a few years ago that driving cars really seemed to be the sort of thing that machine learning would be good at, and were unpleasantly surprised by how the last 0.1% of driving conditions were resistant to shallow techniques. Despite the inevitable fact that some surprises of this kind now exist, and that more such surprises will exist in the future, it continues to seem to me that science-and-engineering on the level of “invent nanotech” still seems pretty unlikely to be easy to do with shallow thought, by means that humanity discovers before AGI tech manages to learn deep thought? What actual cognitive steps?  Outside-the-box thinking, throwing away generalizations that governed your previous answers and even your previous questions, inventing new ways to represent your questions, figuring out which questions you need to ask and developing plans to answer them; these are some answers that I hope will be sufficiently useless to AI developers that it is safe to give them, while still pointing in the direction of things that have an un-GPT-3-like quality of depth about them. Doing this across unfamiliar domains that couldn’t be directly trained in by gradient descent because they were too expensive to simulate a billion examples of If you have something this powerful, why is it not also noticing that the world contains humans?  Why is it not noticing itself? [Ngo][17:30] If humans were to invent this type of nanotech, what do you expect the end intellectual result to be? E.g. consider the human knowledge involved in building cars There are thousands of individual parts, each of which does a specific thing [Yudkowsky][17:30] Uhhhh… is there a reason why “Eric Drexler’s *Nanosystems* but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot” is not the obvious answer here? [Ngo][17:31] And some deep principles governing engines, but not really very crucial ones to actually building (early versions of) those engines [Yudkowsky][17:31] that’s… not historically true at all? getting a grip on quantities of heat and their flow was *critical* to getting steam engines to work it didn’t happen until the math was there [Ngo][17:32] Ah, interesting [Yudkowsky][17:32] maybe you can be a mechanic banging on an engine that somebody else designed, around principles that somebody even earlier invented, without a physics degree but, like, engineers have actually needed math since, like, that’s been a thing, it wasn’t just a prestige trick [Ngo][17:34] Okay, so you expect there to be a bunch of conceptual work in finding equations which govern nanosystems. > > Uhhhh… is there a reason why “Eric Drexler’s *Nanosystems* but, like, the real thing, modulo however much Drexler did not successfully Predict the Future about how to do that, which was probably a lot” is not the obvious answer here? > > > This may in fact be the answer; I haven’t read it though. [Yudkowsky][17:34] or other abstract concepts than equations, which have never existed before like, maybe not with a type signature unknown to humanity, but with specific instances unknown to present humanity that’s what I’d expect to see from humanly designed nanosystems [Ngo][17:35] So something like AlphaFold is only doing a very small proportion of the work here, since it’s not able to generate new abstract concepts (of the necessary level of power) [Yudkowsky][17:35] yeeeessss, that is why DeepMind did not take over the world last year it’s not just that AlphaFold lacks the concepts but that it lacks the machinery to invent those concepts and the machinery to do anything with such concepts [Ngo][17:38] I think I find this fairly persuasive, but I also expect that people will come up with increasingly clever ways to leverage narrow systems so that they can do more and more work. (including things like: if you don’t have enough simulations, then train another narrow system to help fix that, etc) [Yudkowsky][17:39] (and they will accept their trivial billion-dollar-payouts and World GDP will continue largely undisturbed, on my mainline model, because it will be easiest to find ways to make money by leveraging narrow systems on the less regulated, less real parts of the economy, instead of trying to build houses or do medicine, etc.) real tests being expensive, simulation being impossibly expensive, and not having enough samples to train your civilization’s current level of AI technology, is not a problem you can solve by training a new AI to generate samples, because you do not have enough samples to train your civilization’s current level of AI technology to generate more samples [Ngo][17:41] Thinking about nanotech makes me more sympathetic to the argument that developing general intelligence will bring a sharp discontinuity. But it also makes me expect longer timelines to AGI, during which there’s more time to do interesting things with narrow AI. So I guess it weighs more against Dario’s view, less against Paul’s view. [Yudkowsky][17:41] well, I’ve been debating Paul about that separately in the timelines channel, not sure about recapitulating it here but in broad summary, since I expect the future to look like it was drawn from the “history book” barrel and not the “futurism” barrel, I expect huge barriers to doing *huge* things with narrow AI in small amounts of time; you can sell waifutech because it’s unregulated and hard to regulate, but that doesn’t feed into core mining and steel production. we could already have double the GDP if it was legal to build houses and hire people, etc., and the change brought by pre-AGI will perhaps be that our GDP could *quadruple* instead of just *double* if it was legal to do things, but that will not make it legal to do things, and why would anybody try to do things and probably fail when there are easier $36 billion profits to be made in waifutech.   ### 14.3. Relatively shallow cognition, Go, and math   [Ngo][17:45] I’d be interested to see Paul’s description of how we would train AIs to solve hard scientific problems. I think there’s some prediction that’s like “we train it on arxiv and fine-tune it until it starts to output credible hypotheses about nanotech”. And this seems like it has a step that’s quite magical to me, but perhaps that’ll be true of any prediction that I make before fully understanding how intelligence works. [Yudkowsky][17:46] my belief is not so much that this training can never happen, but that this probably means the system was trained *beyond the point of safe shallowness* not in principle over all possible systems a superintelligence could build, but in practice when it happens on Earth my only qualm about this is that current techniques make it possible to buy shallowness in larger quantities than this Earth has ever seen before, and people are looking for surprising ways to make use of that so I weigh in my mind the thought of Reality saying Gotcha! by handing me a headline I read tomorrow about how GPT-4 has started producing totally reasonable science papers that are actually correct and I am pretty sure that exact thing doesn’t happen and I ask myself about GPT-5 in a few more years, which had the same architecture as GPT-3 but more layers and more training, doing the same thing and it’s still largely “nope” then I ask myself about people in 5 years being able to use the shallow stuff *in any way whatsoever* to produce the science papers and of course the answer there is, “okay, but is it doing that without having shallowly learned stuff *that adds up to deep stuff* which is *why it can now do science*“ and I try saying back “no, it was born of shallowness and it remains shallow and it’s just doing science because it turns out that there is totally a way to be an incredibly mentally shallow skillful scientist if you think 10,000 shallow thoughts per minute instead of 1 deep thought per hour” and my brain is like, “I cannot absolutely rule it out but it really seems like trying to call the next big surprise in 2014 and you guess self-driving cars instead of Go because how the heck would you guess that Go was shallower than self-driving cars” like, that is an *imaginable* surprise [Ngo][17:52] On that *particular* point it seems like the very reasonable heuristic of “pick the most similar task” would say that go is like chess and therefore you can do it shallowly. [Yudkowsky][17:52] but there’s a world of difference between saying that a surprise is imaginable, and that it wouldn’t surprise you [Ngo][17:52] I wasn’t thinking that much about AI at that point, so you’re free to call that post-hoc. [Yudkowsky][17:52] the Chess techniques had already failed at Go actual new techniques were required the people around at the time had witnessed sudden progress on self-driving cars a few years earlier [Ngo][17:53] My advance prediction here is that “math is like go and therefore can be done shallowly”. [Yudkowsky][17:53] self-driving cars were of obviously greater economic interest as well my recollection is that talk of the time was about self-driving heh! I have the same sense. that is, math being shallower than science. though perhaps not as shallow as Go, and you will note that Go has fallen and Math has not [Ngo][17:54] right I also expect that we’ll need new techniques for math (although not as different from the go techniques as the go techniques were from chess techniques) But I guess we’re not finding strong disagreements here either. [Yudkowsky][17:57] if Reality came back and was like “Wrong! Keeping up with the far reaches of human mathematics is harder than being able to develop your own nanotech,” I would be like “What?” to about the same degree as being “What?” on “You can build nanotech just by thinking trillions of thoughts that are too shallow to notice humans!” [Ngo][17:58] Perhaps let’s table this topic and move on to one of the others Nate suggested? I’ll note that walking through the steps required to invent a science of nanotechnology does make your position feel more compelling, but I’m not sure how much of that is the general “intelligence is magic” intuition I mentioned before. [Yudkowsky][17:59] How do you suspect your beliefs would shift if you had any detailed model of intelligence? Consider trying to imagine a particular wrong model of intelligence and seeing what it would say differently? (not sure this is a useful exercise and we could indeed try to move on) [Ngo][18:01] I think there’s one model of intelligence where scientific discovery is more actively effortful – as in, you need to be very goal-directed in determining hypotheses, testing hypotheses, and so on. And there’s another in which scientific discovery is more constrained by flashes of insight, and the systems which are producing those flashes of insight are doing pattern-matching in a way that’s fairly disconnected from the real-world consequences of those insights. [Yudkowsky][18:05] The first model is true and the second one is false, if that helps.  You can tell this by contemplating where you would update if you learned any model, by considering that things look more disconnected when you can’t see the machinery behind them.  If you don’t know what moves the second hand on a watch and the minute hand on a watch, they could just be two things that move at different rates for completely unconnected reasons; if you can see inside the watch, you’ll see that the battery is shared and the central timing mechanism is shared and then there’s a few gears to make the hands move at different rates. Like, in my ontology, the notion of “effortful” doesn’t particularly parse as anything basic, because it doesn’t translate over into paperclip maximizers, which are neither effortful nor effortless. But in a human scientist you’ve got thoughts being shoved around by all sorts of processes behind the curtains, created by natural selection, some of them reflecting shards of Consequentialism / shadowing paths through time The flashes of insight come to people who were looking in nonrandom places If they didn’t plan deliberately and looked on pure intuition, they looked with an intuition trained by past success and failure Somebody walking doesn’t plan to walk, but long ago as a baby they learned from falling over, and their ancestors who fell over more didn’t reproduce [Ngo][18:09] I think the first model is probably more true for humans in the domain of science. But I’m uncertain about the extent to which this because humans have not been optimised very much for doing science. If we consider the second model in a domain that humans have actually been optimised very hard for (say, physical activity) – then maybe we can use the analogy of a coach and a player. The coach can tell the player what to practice, but almost all the work is done by the player practicing in a way which updates their intuitions. This has become very abstract, though.   ### 14.4. Pivotal acts and historical precedents   [Ngo][18:11] > > A few takes: > > > 1. It looks to me like there’s some crux in “how useful will the ‘shallow’ stuff get before dangerous things happen”. I would be unsurprised if this spiraled back into the gradualness debate. I’m excited about attempts to get specific and narrow disagreements in this domain (not necessarily bettable; I nominate distilling out specific disagreements before worrying about finding bettable ones). > > > 2. It seems plausible to me we should have some much more concrete discussion about possible ways things could go right, according to Richard. I’d be up for playin the role of beeping when things seem insufficiently concrete. > > > 3. It seems to me like Richard learned a couple things about Eliezer’s model in that last bout of conversation. I’d be interested to see him try to paraphrase his current understanding of it, and to see Eliezer produce beeps where it seems particularly off. > > > Here’s Nate’s comment. We could try his #2 suggestion: concrete ways that things could go right. [Soares][18:12] (I am present and am happy to wield the concreteness-hammer) [Ngo][18:13] I think I’m a little cautious about this line of discussion, because my model doesn’t strongly constrain the ways that different groups respond to increasing developments in AI. The main thing I’m confident about is that there will be much clearer responses available to us once we have a better picture of AI development. E.g. before modern ML, the option of international constraints on compute seemed much less salient, because algorithmic developments seemed much more important. Whereas now, tracking/constraining compute use seems like one promising avenue for influencing AGI development. Or in the case of nukes, before knowing the specific details about how they were constructed, it would be hard to give a picture of how arms control goes well. But once you know more details about the process of uranium enrichment, you can construct much more efficacious plans. [Yudkowsky][18:19] Once we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman) [Ngo][18:19, moved two down in log] (As a side note, I think that if Eliezer had been around in the 1930s, and you described to him what actually happened with nukes over the next 80 years, he would have called that “insanely optimistic”.) [Yudkowsky][18:21] Mmmmmmaybe.  Do note that I tend to be more optimistic than the average human about, say, global warming, or everything in transhumanism outside of AGI. Nukes have going for them that, in fact, nobody has an incentive to start a global thermonuclear war.  Eliezer is not in fact pessimistic about everything and views his AGI pessimism as generalizing to very few other things, which are not, in fact, as bad as AGI. [Ngo][18:21] I think I put this as the lowest application of competent power out of the things listed in my doc; I’d need to look at the historical details to know if important decision-makers actually cared about it, or were just doing it for PR reasons. [Shulman][18:22] > > Once we knew specific things about bioweapons, countries developed specific treaties for controlling them, which failed (according to @CarlShulman) > > > The treaties were pro forma without verification provisions because the powers didn’t care much about bioweapons. They did have verification for nuclear and chemical weapons which did work. [Yudkowsky][18:22] But yeah, compared to pre-1946 history, nukes actually kind of did go *really surprisingly well!* Like, this planet used to be a huge warring snakepit of Great Powers and Little Powers and then nukes came along and people actually got serious and decided to stop having the largest wars they could fuel. [Shulman][18:22][18:23] The analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they’re doing it.. The BWC also allowed ‘defensive’ research that is basically as bad as the offensive kind. [Yudkowsky][18:23] > > The analog would be an international agreement to sign a nice unenforced statement of AI safety principles and then all just building AGI in doomy ways without explicitly saying they’re doing it.. > > > This scenario sure sounds INCREDIBLY PLAUSIBLE, yes [Ngo][18:22] On that point: do either of you have strong opinions about the anthropic shadow argument about nukes? That seems like one reason why the straw 1930s-Eliezer I just cited would have been justified. [Yudkowsky][18:23] I mostly don’t consider the anthropic shadow stuff [Shulman][18:24] In the late Cold War Gorbachev and Reagan might have done the BWC treaty+verifiable dismantling, but they were in a rush on other issues like nukes and collapse of the USSR. Putin just wants to keep his bioweapons program, it looks like. Even denying the existence of the exposed USSR BW program. [Yudkowsky][18:25] I’m happy making no appeal to anthropics here. [Shulman][18:25] Boo anthropic shadow claims. Always dumb. (Sorry I was only invoked for BW, holding my tongue now.) | | | | --- | --- | | [Yudkowsky: ❤] | [Soares: ❤] | [Yudkowsky][18:26] There may come a day when the strength of nonanthropic reasoning fails… but that is not this day! [Ngo][18:27] Okay, happy to rule that out for now too. So yeah, I picture 1930s-Eliezer pointing to technological trends and being like “by default, 30 years after the first nukes are built, you’ll be able to build one in your back yard. And governments aren’t competent enough to stop that happening.” And I don’t think I could have come up with a compelling counterargument back then. [Soares][18:27] > > [Sorry I was only invoked for BW, holding my tongue now.] > > > (fwiw, I thought that when Richard asked “you two” re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.) [Ngo][18:28] > > (fwiw, I thought that when Richard asked “you two” re: anthropic shadow, he meant you also. But I appreciate the caution. And in case Richard meant me, I will note that I agree w/ Carl and Eliezer on this count.) > > > Oh yeah, sorry for the ambiguity, I meant Carl. I do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. But I also expect that there will be many more details about AI development that we don’t currently understand, that will allow us to influence it (because AGI is a much more complicated concept than “really really big bomb”). [Yudkowsky][18:29] > > [So yeah, I picture 1930s-Eliezer pointing to technological trends and being like “by default, 30 years after the first nukes are built, you’ll be able to build one in your back yard. And governments aren’t competent enough to stop that happening.” > > > And I don’t think I could have come up with a compelling counterargument back then.] > > > So, I mean, in fact, I don’t prophesize doom from very many trends at all!  It’s literally just AGI that is anywhere near that unmanageable!  Many people in EA are more worried about biotech than I am, for example. [Ngo][18:31] I appreciate that my response is probably not very satisfactory to you here, so let me try to think about more concrete things we can disagree about. [Yudkowsky][18:31] > > [I do believe that AI control will be more difficult than nuclear control, because AI is so much more useful. But I also expect that there will be many more details about AI development that we don’t currently understand, that will allow us to influence it (because AGI is a much more complicated concept than “really really big bomb”).] > > > Er… I think this is not a correct use of the Way I was attempting to gesture at; things being more complicated when known than unknown, does not mean you have more handles to influence them because each complication has the potential to be a handle.  It is not in general true that very complicated things are easier for humanity in general, and governments in particular, to control, because they have so many exposed handles. I think there’s a valid argument about it maybe being more possible to control the supply chain for AI training processors if the global chip supply chain is narrow (also per Carl). [Ngo][18:34] One thing that we seemed to disagree on, to a significant extent, is the difficulty of “US and China preventing any other country from becoming a leader in AI” [Yudkowsky][18:35] It is in fact a big deal about nuclear tech that uranium can’t be mined in every country, as I understand it, and that centrifuges stayed at the frontier of technology and were harder to build outside the well-developed countries, and that the world ended up revolving around a few Great Powers that had no interest in nuclear tech proliferating any further. [Ngo][18:35] It seems to me that the US and/or China could apply a lot of pressure to many countries. [Yudkowsky][18:35] Unfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable. [Ngo][18:35] E.g. if the UK had actually seriously tried to block Google’s acquisition of DeepMind, and the US had actually seriously tried to convince them not to do so, then I expect that the UK would have folded. (Although it’s a weird hypothetical.) > > Unfortunately, before you let that encourage you too much, I would also note it was an important fact about nuclear bombs that they did not produce streams of gold and then ignite the atmosphere if you turned up the stream of gold too high with the actual thresholds involved being unpredictable. > > > Not a critical point, but nuclear power does actually seem like a “stream of gold” in many ways. (also, quick meta note: I need to leave in 10 mins) [Yudkowsky][18:38] I would be a lot more cheerful about a few Great Powers controlling AGI if AGI produced wealth, but more powerful AGI produced no more wealth; if AGI was made entirely out of hardware, with no software component that could be keep getting orders of magnitude more efficient using hardware-independent ideas; and if the button on AGIs that destroyed the world was clearly labeled. That does take AGI to somewhere in the realm of nukes. [Ngo][18:38] How much improvement do you think can be eked out of existing amounts of hardware if people just try to focus on algorithmic improvements? [Yudkowsky][18:38] And Eliezer is capable of being less concerned about things when they are intrinsically less concerning, which is why my history does not, unlike some others in this field, involve me running also being Terribly Concerned about nuclear war, global warming, biotech, and killer drones. [Ngo][18:39] This says 44x improvements over 7 years: <https://openai.com/blog/ai-and-efficiency/> ![](https://images-ext-1.discordapp.net/external/0ZgRbpmbv_D6LHuB59diNlLopn91Ii66xvJmWy8-jTc/https/openai.com/content/images/2020/05/ai-and-efficiency-social.png?width=1200&height=628) [Yudkowsky][18:39] Well, if you’re a superintelligence, you can probably do human-equivalent human-speed general intelligence on a 286, though it might possibly have less fine motor control, or maybe not, I don’t know. [Ngo][18:40] (within reasonable amounts of human-researcher-time – say, a decade of holding hardware fixed) [Yudkowsky][18:40] I wouldn’t be surprised if human ingenuity asymptoted out at AGI on a home computer from 1995. Don’t know if it’d take more like a hundred years or a thousand years to get fairly close to that. [Ngo][18:41] Does this view cash out in a prediction about how the AI and Efficiency graph projects into the future? [Yudkowsky][18:42] The question of how efficiently you can perform a fixed algorithm doing fixed things, often pales compared to the gains on switching to different algorithms doing different things. Given government control of all the neural net training chips and no more public GPU farms, I buy that they could keep a nuke!AGI (one that wasn’t tempting to crank up and had clearly labeled Doom-Causing Buttons whose thresholds were common knowledge) under lock of the Great Powers for 7 years, during which software decreased hardware requirements by 44x.  I am a bit worried about how long it takes before there’s a proper paradigm shift on the level of deep learning getting started in 2006, after which the Great Powers need to lock down on individual GPUs. [Ngo][18:46] Hmm, okay.   ### 14.5. Past ANN progress   [Ngo][18:46] I don’t expect another paradigm shift like that (in part because I’m not sure the paradigm shift actually happened in the first place – it seems like neural networks were improving pretty continuously over many decades) [Yudkowsky][18:47] I’ve noticed that opinion around OpenPhil!  It makes sense if you have short timelines and expect the world to end before there’s another paradigm shift, but OpenPhil doesn’t seem to expect that either. Yeah, uh, there was kinda a paradigm shift in AI between say 2000 and now.  There really, really was. [Ngo][18:49] What I mean is more like: it’s not clear to me that an extrapolation of the trajectory of neural networks is made much better by incorporating data about the other people who weren’t using neural networks. [Yudkowsky][18:49] Would you believe that at one point Netflix ran a prize contest to produce better predictions of their users’ movie ratings, with a $1 million prize, and this was one of the largest prizes ever in AI and got tons of contemporary ML people interested, and neural nets were not prominent on the solutions list at all, because, back then, people occasionally solved AI problems *not using neural nets*? I suppose that must seem like a fairy tale, as history always does, but I lived it! [Ngo][18:50] (I wasn’t denying that neural networks were for a long time marginalised in AI) I’d place much more credence on future revolutions occurring if neural networks had actually only been invented recently. (I have to run in 2 minutes) [Yudkowsky][18:51] The world might otherwise end before the next paradigm shift, but if the world keeps on ticking for 10 years, 20 years, there will not always be the paradigm of training massive networks by even more massive amounts of gradient descent; I do not think that is actually the most efficient possible way to turn computation into intelligence. Neural networks stayed stuck at only a few layers for a long time, because the gradients would explode or die out if you made the networks any deeper. There was a critical moment in 2006(?) where Hinton and Salakhutdinov(?) proposed training Restricted Boltzmann machines unsupervised in layers, and then ‘unrolling’ the RBMs to initialize the weights in the network, and then you could do further gradient descent updates from there, because the activations and gradients wouldn’t explode or die out given that initialization.  That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention *on* the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less. [Ngo][18:56] Okay, so are you claiming that that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history? [Yudkowsky][18:56] If anybody goes back and draws a graph claiming the whole thing was continuous if you measure the right metric, I am not really very impressed unless somebody at the time was using that particular graph and predicting anything like the right capabilities off of it. [Ngo][18:56] If so this seems like an interesting question to get someone with more knowledge of ML history than me to dig into; I might ask around. [Yudkowsky][18:57] > > [Okay, so are you claiming that that neural networks were mostly bottlenecked by algorithmic improvements, not compute availability, for a significant part of their history?] > > > Er… yeah?  There was a long time when, even if you threw a big neural network at something, it just wouldn’t work. Good night, btw? [Ngo][18:57] Let’s call it here; thanks for the discussion. [Soares][18:57] Thanks, both! [Ngo][18:57] I’ll be interested to look into that claim, it doesn’t fit with the impressions I have of earlier bottlenecks. I think the next important step is probably for me to come up with some concrete governance plans that I’m excited about. I expect this to take quite a long time [Soares][18:58] We can coordinate around that later. Sorry for keeping you so late already, Richard. [Ngo][18:59] No worries My proposal would be that we should start on whatever work is necessary to convert the debate into a publicly accessible document now In some sense coming up with concrete governance plans is my full-time job, but I feel like I’m still quite a way behind in my thinking on this, compared with people who have been thinking about governance specifically for longer [Soares][19:01] (@RobBensinger is already on it ![🙂](https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png)) | | | --- | | [Bensinger: ✅] | [Yudkowsky][19:03] Nuclear plants might be like narrow AI in this analogy; some designs potentially contribute to proliferation, and you can get more economic wealth by building more of them, but they have no Unlabeled Doom Dial where you can get more and more wealth out of them by cranking them up until at some unlabeled point the atmosphere ignites. Also a thought: I don’t think you just want somebody with more knowledge of AI history, I think you might need to ask an actual old fogey *who was there at the time*, and hasn’t just learned an ordered history of just the parts of the past that are relevant to the historian’s theory about how the present happened. Two of them, independently, to see if the answers you get are reliable-as-in-statistical-reliability. [Soares][19:19] My own quick take, for the record, is that it looks to me like there are two big cruxes here. One is about whether “deep generality” is a good concept, and in particular whether it pushes AI systems quickly from “nonscary” to “scary” and whether we should expect human-built AI systems to acquire it in practice (before the acute risk period is ended by systems that lack it). The other is about how easy it will be to end the acute risk period (eg by use of politics or nonscary AI systems alone). I suspect the latter is the one that blocks on Richard thinking about governance strategies. I’d be interested in attempting further progress on the former point, though it’s plausible to me that that should happen over in #timelines instead of here.   The post [Ngo and Yudkowsky on scientific reasoning and pivotal acts](https://intelligence.org/2022/03/01/ngo-and-yudkowsky-on-scientific-reasoning-and-pivotal-acts/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
ce690724-3791-4dcb-a958-995179259866
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Probability that other architectures will scale as well as Transformers? GPT-1, 2, and 3 have shown impressive scaling properties. How likely is it that, in the next five years, many other architectures will also be shown to get substantially better as they get bigger? EDIT I am open to discussion of better definitions of the scaling hypothesis. For example, maybe Gwern means something different [here](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang?commentId=jbD8siv7GMWxRro43) in which case I'm also interested in that.
3b7fb733-2c17-4f0e-be7c-2b2e330c313d
StampyAI/alignment-research-dataset/special_docs
Other
A survey on transfer learning 1 A Survey on Transfer Learning Sinno Jialin Pan and Qiang Y ang Fellow, IEEE Abstract —A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classi fication task in one domain of interest, but we only have suf ficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases,knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data labelingefforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classi fication, regression and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multi-task learning and sample selection bias, as well as co-variate shift. We also explore some potential future issues in transfer learning research. Index Terms —Transfer Learning, Survey, Machine Learning, Data Mining. ✦ 1I NTRODUCTION Data mining and machine learning technologies have already achieved signi ficant success in many knowledge engineering areas including classi fication, regression and clustering (e.g., [1], [2]). However, many machine learning methods work wellonly under a common assumption: the training and test data aredrawn from the same feature space and the same distribution. When the distribution changes, most statistical models need to be rebuilt from scratch using newly collected training data. Inmany real world applications, it is expensive or impossible tore-collect the needed training data and rebuild the models. Itwould be nice to reduce the need and effort to re-collect the training data. In such cases, knowledge transfer ortransfer learning between task domains would be desirable. Many examples in knowledge engineering can be found where transfer learning can truly be bene ficial. One example is Web document classi fication [3], [4], [5], where our goal is to classify a given Web document into several prede fined categories. As an example in the area of Web-documentclassi fication (see, e.g., [6]), the labeled examples may be the university Web pages that are associated with categoryinformation obtained through previous manual-labeling efforts.For a classi fic a t i o nt a s ko nan e w l yc r e a t e dW e bs i t ew h e r et h e data features or data distributions may be different, there maybe a lack of labeled training data. As a result, we may not beable to directly apply the Web-page classi fiers learned on the university Web site to the new Web site. In such cases, it wouldbe helpful if we could transfer the classi fication knowledge into the new domain. The need for transfer learning may arise when the data can be easily outdated. In this case, the labeled data obtained inone time period may not follow the same distribution in alater time period. For example, in indoor WiFi localization Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Clearwater Bay, Kowloon, Hong Kong Emails: {sinnopan, qyang }@cse.ust.hkproblems, which aims to detect a user’s current location based on previously collected WiFi data, it is very expensive tocalibrate WiFi data for building localization models in a large-scale environment, because a user needs to label a large collection of WiFi signal data at each location. However, the WiFi signal-strength values may be a function of time, deviceor other dynamic factors. A model trained in one time periodor on one device may cause the performance for locationestimation in another time period or on another device to bereduced. To reduce the re-calibration effort, we might wish to adapt the localization model trained in one time period (the source domain) for a new time period (the target domain), orto adapt the localization model trained on a mobile device (thesource domain) for a new mobile device (the target domain),as done in [7]. As a third example, consider the problem of sentiment classi fication, where our task is to automatically classify the reviews on a product, such as a brand of camera, into positiveand negative views. For this classi fication task, we need to first collect many reviews of the product and annotate them. We would then train a classi fier on the reviews with their corresponding labels. Since the distribution of review dataamong different types of products can be very different, to maintain good classi fication performance, we need to collect a large amount of labeled data in order to train the review-classi fication models for each product. However, this data- labeling process can be very expensive to do. To reduce the effort for annotating reviews for various products, we may want to adapt a classi fication model that is trained on some products to help learn classi fication models for some other products. In such cases, transfer learning can save a signi ficant amount of labeling effort [8]. In this survey article, we give a comprehensive overview of transfer learning for classi fication, regression and clustering developed in machine learning and data mining areas. Therehas been a large amount of work on transfer learning forreinforcement learning in the machine learning literature (e.g., Digital Object Indentifier 10.1109/TKDE.2009.191 1041-4347/$25.00 © 2009 IEEEIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 2 [9], [10]). However, in this paper, we only focus on transfer learning for classi fication, regression and clustering problems that are related more closely to data mining tasks. By doingthe survey, we hope to provide a useful resource for the datamining and machine learning community. The rest of the survey is organized as follows. In the next four sections, we first give a general overview and de fine some notations we will use later. We then brie fly survey the history of transfer learning, give a uni fied de finition of transfer learning and categorize transfer learning into three differentsettings (given in Table 2 and Figure 2). For each setting, wereview different approaches, given in Table 3 in detail. Afterthat, in Section 6, we review some current research on thetopic of “negative transfer”, which happens when knowledgetransfer has a negative impact on target learning. In Section 7,we introduce some successful applications of transfer learningand list some published data sets and software toolkits fortransfer learning research. Finally, we conclude the article witha discussion of future works in Section 8. 2O VERVIEW 2.1 A Brief History of Transfer Learning Traditional data mining and machine learning algorithms make predictions on the future data using statistical models that aretrained on previously collected labeled or unlabeled trainingdata [11], [12], [13]. Semi-supervised classi fication [14], [15], [16], [17] addresses the problem that the labeled data may be too few to build a good classi fier, by making use of a large amount of unlabeled data and a small amount of labeleddata. Variations of supervised and semi-supervised learningfor imperfect datasets have been studied; for example, Zhuand Wu [18] have studied how to deal with the noisy class-label problems. Yang et al. considered cost-sensitive learning [19] when additional tests can be made to future samples.Nevertheless, most of them assume that the distributions ofthe labeled and unlabeled data are the same. Transfer learning , in contrast, allows the domains, tasks, and distributions used in training and testing to be different. In the real world, weobserve many examples of transfer learning. For example,we may find that learning to recognize apples might help to recognize pears. Similarly, learning to play the electronic organmay help facilitate learning the piano. The study of Transfer learning is motivated by the fact that people can intelligently apply knowledge learned previously to solve new problemsfaster or with better solutions. The fundamental motivationforTransfer learning in the field of machine learning was discussed in a NIPS-95 workshop on “Learning to Learn” 1, which focused on the need for lifelong machine-learning methods that retain and reuse previously learned knowledge. Research on transfer learning has attracted more and more attention since 1995 in different names: learning tolearn, life-long learning, knowledge transfer, inductive trans- fer, multi-task learning, knowledge consolidation, context- sensitive learning, knowledge-based inductive bias, meta learn-ing, and incremental/cumulative learning [20]. Among these, 1. http://socrates.acadiau.ca/courses/comp/dsilver/NIPS95 LTL/ transfer.workshop.1995.htmla closely related learning technique to transfer learning is the multi-task learning framework [21], which tries to learn multiple tasks simultaneously even when they are different.A typical approach for multi-task learning is to uncover thecommon (latent) features that can bene fit each individual task. In 2005, the Broad Agency Announcement (BAA) 05-29 of Defense Advanced Research Projects Agency (DARPA)’sInformation Processing Technology Of fice (IPTO) 2gave a new mission of transfer learning: the ability of a system torecognize and apply knowledge and skills learned in previoustasks to novel tasks. In this de finition, transfer learning aims to extract the knowledge from one or more source tasks and applies the knowledge to a target task . In contrast to multi-task learning, rather than learning all of the source and target taskssimultaneously, transfer learning cares most about the targettask. The roles of the source and target tasks are no longersymmetric in transfer learning. Figure 1 shows the difference between the learning processes of traditional and transfer learning techniques. Aswe can see, traditional machine learning techniques try to learneach task from scratch, while transfer learning techniques tryto transfer the knowledge from some previous tasks to a targettask when the latter has fewer high-quality training data. (a) Traditional Machine Learning (b) Transfer Learning Fig. 1. Different Learning Processes between TraditionalMachine Learning and Transfer Learning Today, transfer learning methods appear in several top venues, most notably in data mining (ACM KDD, IEEE ICDMand PKDD, for example), machine learning (ICML, NIPS,ECML, AAAI and IJCAI, for example) and applications of machine learning and data mining (ACM SIGIR, WWW and ACL for example) 3. Before we give different categorizations of transfer learning, we first describe the notations used in this article. 2.2 Notations and De finitions In this section, we introduce some notations and de finitions that are used in this survey. First of all, we give the de finitions of a “domain” and a “task”, respectively. In this survey, a domain Dconsists of two components: a feature space Xand a marginal probability distribution P(X), where X={x1,...,x n}∈X . For example, if our learning 2. http://www.darpa.mil/ipto/programs/tl/tl.asp 3. We summarize a list of conferences and workshops where transfer learning papers appear in these few years in the following webpage for reference, http://www.cse.ust.hk/ ∼sinnopan/conferenceTL.htmIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 3 task is document classi fication, and each term is taken as a binary feature, then Xis the space of all term vectors, xi is the ithterm vector corresponding to some documents, and Xis a particular learning sample. In general, if two domains are different, then they may have different feature spaces ordifferent marginal probability distributions. Given a speci fic domain, D={X,P(X)},atask consists of two components: a label space Yand an objective predictive function f(·)(denoted by T={Y,f(·)}), which is not observed but can be learned from the training data, whichconsist of pairs {x i,yi}, where xi∈Xandyi∈Y .T h e function f(·)can be used to predict the corresponding label, f(x), of a new instance x. From a probabilistic viewpoint, f(x)can be written as P(y|x). In our document classi fication example, Yis the set of all labels, which is True, False for a binary classi fication task, and yiis “True” or “False”. For simplicity, in this survey, we only consider the case where there is one source domain DS, and one target domain, DT, as this is by far the most popular of the research works in the literature. More speci fically, we denote the source domain data asDS={(xS1,yS1),...,(xSnS,ySnS)}, where xSi∈ XSi st h ed a t ai n s t a n c ea n d ySi∈Y Sis the corresponding class label. In our document classi fication example, DScan be a set of term vectors together with their associated true orfalse class labels. Similarly, we denote the target domain dataasD T={(xT1,yT1),...,(xTnT,yTnT)}, where the input xTi is inXTandyTi∈YTis the corresponding output. In most cases, 0≤nT/lessmuchnS. We now give a uni fied de finition of transfer learning. Definition 1 (Transfer Learning) Given a source domain DS and learning task TS, a target domain DTand learning task TT,transfer learning aims to help improve the learning of the target predictive function fT(·)inDTusing the knowledge in DSandTS, where DS/negationslash=DT,o rTS/negationslash=TT. In the above de finition, a domain is a pair D={X,P(X)}. Thus the condition DS/negationslash=DTimplies that either XS/negationslash=XTor PS(X)/negationslash=PT(X). For example, in our document classi fication example, this means that between a source document set anda target document set, either the term features are differentbetween the two sets (e.g., they use different languages), ortheir marginal distributions are different. Similarly, a task is de fined as a pair T={Y,P(Y|X)}. Thus the condition T S/negationslash=TTimplies that either YS/negationslash=YT orP(YS|XS)/negationslash=P(YT|XT). When the target and source domains are the same, i.e. DS=DT, and their learning tasks are the same, i.e., TS=TT, the learning problem becomes a traditional machine learning problem. When the domains are different, then either (1) the feature spaces between thedomains are different, i.e. X S/negationslash=XT, or (2) the feature spaces between the domains are the same but the marginalprobability distributions between domain data are different;i.e.P(X S)/negationslash=P(XT), where XSi∈X SandXTi∈X T. As an example, in our document classi fication example, case (1) corresponds to when the two sets of documents aredescribed in different languages, and case (2) may correspondto when the source domain documents and the target domaindocuments focus on different topics.Given speci fic domains D SandDT, when the learning tasksTSandTTare different, then either (1) the label spaces between the domains are different, i.e. YS/negationslash=YT, or (2) the conditional probability distributions between the domains are different; i.e. P(YS|XS)/negationslash=P(YT|XT), where YSi∈YSand YTi∈YT. In our document classi fication example, case (1) corresponds to the situation where source domain has binarydocument classes, whereas the target domain has ten classes toclassify the documents to. Case (2) corresponds to the situationwhere the source and target documents are very unbalanced in terms of the user-de fined classes. In addition, when there exists some relationship, explicit or implicit, between the feature spaces of the two domains, wesay that the source and target domains are related . 2.3 A Categorization of Transfer Learning Tech- niques Intransfer learning , we have the following three main research issues: (1) What to transfer; (2) How to transfer; (3) When totransfer. “What to transfer” asks which part of knowledge can be transferred across domains or tasks. Some knowledge isspecific for individual domains or tasks, and some knowledge may be common between different domains such that they mayhelp improve performance for the target domain or task. Afterdiscovering which knowledge can be transferred, learningalgorithms need to be developed to transfer the knowledge,which corresponds to the “how to transfer” issue. “When to transfer” asks in which situations, transferring skills should be done. Likewise, we are interested in knowing in which situations, knowledge should notbe transferred. In some situations, when the source domain and target domain arenot related to each other, brute-force transfer may be unsuc-cessful. In the worst case, it may even hurt the performanceof learning in the target domain, a situation which is oftenreferred to as negative transfer . Most current work on transfer learning focuses on “What to transfer” and “How to transfer”, by implicitly assuming that the source and target domains berelated to each other. However, how to avoid negative transferis an important open issue that is attracting more and moreattention in the future. Based on the de finition of transfer learning, we summarize the relationship between traditional machine learning and var-ious transfer learning settings in Table 1, where we categorizetransfer learning under three sub-settings, inductive trans- fer learning ,transductive transfer learning andunsupervised transfer learning , based on different situations between the source and target domains and tasks. 1) In the inductive transfer learning setting, the target task is different from the source task, no matter when the source and target domains are the same or not.In this case, some labeled data in the target domain arerequired to induce an objective predictive model f T(·) for use in the target domain. In addition, according todifferent situations of labeled and unlabeled data in the source domain, we can further categorize the inductive transfer learning setting into two cases:IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 4 TABLE 1 Relationship between Traditional Machine Learning and Various Transfer Learning Settings Learning Settings Source and Target Domains Source and Target Tasks Traditional Machine Learning the same the same Inductive Transfer Learning / the same different but related Transfer Learning Unsupervised Transfer Learning different but related different but related Transductive Transfer Learning different but related the same (1.1) A lot of labeled data in the source domain are available. In this case, the inductive transfer learning setting is similar to the multi-task learning setting.However, the inductive transfer learning setting only aims at achieving high performance in the target taskby transferring knowledge from the source task whilemulti-task learning tries to learn the target and sourcetask simultaneously.(1.2) No labeled data in the source domain are available.In this case, the inductive transfer learning setting is similar to the self-taught learning setting, which is first proposed by Raina et al. [22]. In the self-taught learning setting, the label spaces between the source and target domains may be different, which impliesthe side information of the source domain cannot be used directly. Thus, it’s similar to the inductive transferlearning setting where the labeled data in the sourcedomain are unavailable. 2) In the transductive transfer learning setting, the source and target tasks are the same, while the source and target domains are different.In this situation, no labeled data in the target domainare available while a lot of labeled data in the sourcedomain are available. In addition, according to differentsituations between the source and target domains, we can further categorize the transductive transfer learning setting into two cases.(2.1) The feature spaces between the source and targetdomains are different, X S/negationslash=XT. (2.2) The feature spaces between domains are the same,X S=XT, but the marginal probability distributions of the input data are different, P(XS)/negationslash=P(XT). The latter case of the transductive transfer learning setting is related to domain adaptation for knowledgetransfer in text classi fication [23] and sample selection bias [24] or co-variate shift [25], whose assumptions aresimilar. 3) Finally, in the unsupervised transfer learning setting, similar to inductive transfer learning setting, the target task is different from but related to the source task.However, the unsupervised transfer learning focus on solving unsupervised learning tasks in the target domain, such as clustering, dimensionality reduction and density estimation [26], [27]. In this case, there are no labeleddata available in both source and target domains intraining. The relationship between the different settings of transfer learning and the related areas are summarized in Table 2 andFigure 2.Approaches to transfer learning in the above three different settings can be summarized into four cases based on “What totransfer”. Table 3 shows these four cases and brief description.Thefirst context can be referred to as instance-based transfer- learning (or instance-transfer) approach [6], [28], [29], [30],[31], [24], [32], [33], [34], [35], which assumes that certainparts of the data in the source domain can be reused forlearning in the target domain by re-weighting . Instance re- weighting and importance sampling are two major techniquesin this context. A second case can be referred to as feature-representation- transfer approach [22], [36], [37], [38], [39], [8], [40], [41], [42], [43], [44]. The intuitive idea behind this case is to learna “good” feature representation for the target domain. In thiscase, the knowledge used to transfer across domains is encoded into the learned feature representation. With the new featurerepresentation, the performance of the target task is expectedto improve signi ficantly. A third case can be referred to as parameter-transfer ap- proach [45], [46], [47], [48], [49], which assumes that thesource tasks and the target tasks share some parameters orprior distributions of the hyper-parameters of the models. Thetransferred knowledge is encoded into the shared parametersor priors. Thus, by discovering the shared parameters or priors,knowledge can be transferred across tasks. Finally, the last case can be referred to as the relational- knowledge-transfer problem [50], which deals with transferlearning for relational domains. The basic assumption behindthis context is that some relationship among the data in thesource and target domains are similar. Thus, the knowledgeto be transferred is the relationship among the data. Recently,statistical relational learning techniques dominate this context[51], [52]. Table 4 shows the cases where the different approaches are used for each transfer learning setting. We can see thattheinductive transfer learning setting has been studied in many research works, while the unsupervised transfer learning setting is a relatively new research topic and only studiedin the context of the feature-representation-transfer case. In addition, the feature-representation-transfer problem has been proposed to all three settings of transfer learning. However,theparameter-transfer and the relational-knowledge-transfer approach are only studied in the inductive transfer learning setting, which we discuss in detail below. 3I NDUCTIVE TRANSFER LEARNING Definition 2 (Inductive Transfer Learning) Given a source domain DSand a learning task TS, a target domain DTandIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 5 TABLE 2 Different Settings of Transfer Learning Transfer Learning Settings Related Areas Source Domain Labels Target Domain Labels Tasks Inductive Transfer Learning Multi-task Learning Available Available Regression, Classi fication Self-taught Learning Unavailable Available Regression, Classi fication Transductive Transfer Learning Domain Adaptation, Sample Selection Bias, Co-variate ShiftAvailable Unavailable Regression,Classi fication Unsupervised Transfer Learning Unavailable Unavailable Clustering, Dimensionality Reduction Fig. 2. An Overview of Different Settings of Transfer TABLE 3 Different Approaches to Transfer Learning Transfer Learning Approaches Brief Description Instance-transfer To re-weight some labeled data in the source domain for use in the target domain [6], [28], [29], [30], [31], [24], [32], [33], [34], [35]. Feature-representation-transfer Find a “good” feature representation that reduces difference between the source and the targetdomains and the error of classi fication and regression models [22], [36], [37], [38], [39], [8], [40], [41], [42], [43], [44]. Parameter-transfer Discover shared parameters or priors between the source domain and target domain models, which can bene fit for transfer learning [45], [46], [47], [48], [49]. Relational-knowledge-transfer Build mapping of relational knowledge between the source domain and the target domains. Both domains are relational domains and i.i.d assumption is relaxed in each domain [50], [51], [52]. TABLE 4 Different Approaches Used in Different Settings Inductive Transfer Learning Transductive Transfer Learning Unsupervised Transfer Learning Instance-transfer√ √ Feature-representation-transfer√ √ √ Parameter-transfer√ Relational-knowledge-transfer√IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 6 a learning task TT,inductive transfer learning aims to help improve the learning of the target predictive function fT(·)in DTusing the knowledge in DSandTS, where TS/negationslash=TT. Based on the above de finition of the inductive transfer learning setting, a few labeled data in the target domain are required as the training data to induce the target predictive function. As mentioned in Section 2.3, this setting has twocases: (1) Labeled data in the source domain are available;(2) Labeled data in the source domain are unavailable whileunlabeled data in the source domain are available. Mosttransfer learning approaches in this setting focus on the former case. 3.1 Transferring Knowledge of Instances Theinstance-transfer approach to the inductive transfer learn- ing setting is intuitively appealing: although the source domaindata cannot be reused directly, there are certain parts of thedata that can still be reused together with a few labeled datain the target domain. Daiet al. [6] proposed a boosting algorithm, TrAdaBoost , which is an extension of the AdaBoost algorithm, to address theinductive transfer learning problems. TrAdaBoost assumes that the source and target domain data use exactly the same setof features and labels, but the distributions of the data in thetwo domains are different. In addition, TrAdaBoost assumes that, due to the difference in distributions between the source and the target domains, some of the source domain data maybe useful in learning for the target domain but some of themmay not and could even be harmful. It attempts to iterativelyre-weight the source domain data to reduce the effect of the“bad” source data while encourage the “good” source data to contribute more for the target domain. For each round ofiteration, TrAdaBoost trains the base classi fier on the weighted source and target data. The error is only calculated on thetarget data. Furthermore, TrAdaBoost uses the same strategy as AdaBoost to update the incorrectly classi fied examples in the target domain while using a different strategy from AdaBoost to update the incorrectly classi fied source examples in the source domain. Theoretical analysis of TrAdaBoost in also given in [6]. Jiang and Zhai [30] proposed a heuristic method to remove “misleading” training examples from the source domain basedon the difference between conditional probabilities P(y T|xT) andP(yS|xS). Liao et al. [31] proposed a new active learning method to select the unlabeled data in a target domain tobe labeled with the help of the source domain data. Wu andDietterich [53] integrated the source domain (auxiliary) data an SVM framework for improving the classi fication performance. 3.2 Transferring Knowledge of Feature Representa- tions The feature-representation-transfer approach to the inductive transfer learning problem aims at finding “good” feature representations to minimize domain divergence and classi fi- cation or regression model error. Strategies to find “good” feature representations are different for different types of thesource domain data. If a lot of labeled data in the source domain are available, supervised learning methods can beused to construct a feature representation. This is similar tocommon feature learning in the field of multi-task learning [40]. If no labeled data in the source domain are available,unsupervised learning methods are proposed to construct thefeature representation. 3.2.1 Supervised Feature Construction Supervised feature construction methods for the inductive transfer learning setting are similar to those used in multi- task learning. The basic idea is to learn a low-dimensionalrepresentation that is shared across related tasks. In addition,the learned new representation can reduce the classi fication or regression model error of each task as well. Argyriou et al.[40] proposed a sparse feature learning method for multi- task learning. In the inductive transfer learning setting, the common features can be learned by solving an optimizationproblem, given as follows. arg min A,U/summationdisplay t∈{T,S}nt/summationdisplay i=1L(yti,/angbracketleftat,UTxti/angbracketright)+γ/bardblA/bardbl2 2,1(1) s.t. U ∈Od In this equation, SandTdenote the tasks in the source domain and target domain, respectively. A=[aS,aT]∈Rd×2 is a matrix of parameters. Uis ad×dorthogonal matrix (mapping function) for mapping the original high-dimensionaldata to low-dimensional representations. The (r, p)-norm of Ais defined as /bardblA/bardbl r,p:= (/summationtextd i=1/bardblai/bardblp r)1 p. The optimization problem (1) estimates the low-dimensional representationsU TXT,UTXSand the parameters, A, of the model at the same time. The optimization problem (1) can be further trans-formed into an equivalent convex optimization formulation andbe solved ef ficiently. In a follow-up work, Argyriou et al. [41] proposed a spectral regularization framework on matrices for multi-task structure learning. Leeet al. [42] proposed a convex optimization algorithm for simultaneously learning meta-priors and feature weights froman ensemble of related prediction tasks. The meta-priors canbe transferred among different tasks. Jebara [43] proposed to select features for multi-task learning with SVMs. Ruckert et al.[54] designed a kernel-based approach to inductive transfer, which aims at finding a suitable kernel for the target data. 3.2.2 Unsupervised Feature Construction In [22], Raina et al. proposed to apply sparse coding [55], which is an unsupervised feature construction method, forlearning higher level features for transfer learning. The basic idea of this approach consists of two steps. In the first step, higher-level basis vectors b={b 1,b2,...,b s}are learned on the source domain data by solving the optimization problem(2) as shown as follows, min a,b/summationdisplay i/bardblxSi−/summationtext jaj Sibj/bardbl2 2+β/bardblaSi/bardbl1 (2) s.t. /bardblbj/bardbl2≤1,∀j∈1,...,sIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 7 In this equation, aj Siis a new representation of basis bjfor inputxSiandβis a coef ficient to balance the feature con- struction term and the regularization term. After learning thebasis vectors b, in the second step, an optimization algorithm (3) is applied on the target domain data to learn higher level features based on the basis vectors b. a ∗ Ti=a r gm i n aTi/bardblxTi−/summationdisplay jaj Tibj/bardbl2 2+β/bardblaTi/bardbl1 (3) Finally, discriminative algorithms can be applied to {a∗ Ti}/primes with corresponding labels to train classi fication or regression models for use in the target domain. One drawback of this method is that the so-called higher-level basis vectors learnedon the source domain in the optimization problem (2) may notbe suitable for use in the target domain. Recently, manifold learning methods have been adapted for transfer learning. In [44], Wang and Mahadevan proposeda Procrustes analysis based approach to manifold alignmentwithout correspondences, which can be used to transfer theknowledge across domains via the aligned manifolds. 3.3 Transferring Knowledge of Parameters Most parameter-transfer approaches to the inductive transfer learning setting assume that individual models for related tasks should share some parameters or prior distributions of hyper- parameters. Most approaches described in this section, includ-ing a regularization framework and a hierarchical Bayesianframework, are designed to work under multi-task learning.However, they can be easily modi fied for transfer learning. As mentioned above, multi-task learning tries to learn both the source and target tasks simultaneously and perfectly, while transfer learning only aims at boosting the performance ofthe target domain by utilizing the source domain data. Thus,in multi-task learning, weights of the loss functions for thesource and target data are the same. In contrast, in transferlearning, weights in the loss functions for different domains can be different. Intuitively, we may assign a larger weight to the loss function of the target domain to make sure that wecan achieve better performance in the target domain. Lawrence and Platt [45] proposed an ef ficient algorithm known as MT-IVM, which is based on Gaussian Processes(GP), to handle the multi-task learning case. MT-IVM tries tolearn parameters of a Gaussian Process over multiple tasks bysharing the same GP prior. Bonilla et al. [46] also investigated multi-task learning in the context of GP. The authors proposedto use a free-form covariance matrix over tasks to modelinter-task dependencies, where a GP prior is used to inducecorrelations between tasks. Schwaighofer et al. [47] proposed to use a hierarchical Bayesian framework (HB) together withGP for multi-task learning. Besides transferring the priors of the GP models, some researchers also proposed to transfer parameters of SVMs under a regularization framework. Evgeniou and Pontil [48] borrowed the idea of HB to SVMs for multi-task learning.The proposed method assumed that the parameter, w, in SVMs for each task can be separated into two terms. One is acommon term over tasks and the other is a task-speci fict e r m .Ininductive transfer learning , w S=w0+vSand w T=w0+vT, where, wSandwTare parameters of the SVMs for the source task and the target learning task, respectively. w0is a common parameter while vSandvTare speci fic parameters for the source task and the target task, respectively. By assuming ft= wt·xto be a hyper-plane for task t, an extension of SVMs to multi-task learning case can be written as the following: min w0,vt,ξtiJ(w0,vt,ξti) (4) =/summationdisplay t∈{S,T}nt/summationdisplay i=1ξti+λ1 2/summationdisplay t∈{S,T}/bardblvt/bardbl2+λ2/bardblw0/bardbl2 s.t. y ti(w0+vt)·xti≥1−ξti, ξti≥0,i∈{1,2,...,n t}and t ∈{S,T}. By solving the optimization problem above, we can learn the parameters w0,vSandvTsimultaneously. Several researchers have pursued the parameter transfer approach further. Gao et al. [49] proposed a locally weighted ensemble learning framework to combine multiple models fortransfer learning, where the weights are dynamically assignedaccording to a model’s predictive power on each test example in the target domain. 3.4 Transferring Relational Knowledge Different from other three contexts, the relational-knowledge- transfer approach deals with transfer learning problems inrelational domains, where the data are non-i.i.d. and can be represented by multiple relations, such as networked data and social network data. This approach does not assume that thedata drawn from each domain be independent and identicallydistributed (i.i.d.) as traditionally assumed. It tries to transfertherelationship among data from a source domain to a target domain. In this context, statistical relational learning techniques are proposed to solve these problems. Mihalkova et al. [50] proposed an algorithm TAMAR that transfers relational knowledge with Markov Logic Networks(MLNs) across relational domains. MLNs [56] is a powerfulformalism, which combines the compact expressiveness of first order logic with flexibility of probability, for statistical relational learning. In MLNs, entities in a relational domainare represented by predicates and their relationships are rep-resented in first-order logic. TAMAR is motivated by the fact that if two domains are related to each other, there may existmappings to connect entities and their relationships from asource domain to a target domain. For example, a professorcan be considered as playing a similar role in an academicdomain as a manager in an industrial management domain.In addition, the relationship between a professor and his orher students is similar to the relationship between a managerand his or her workers. Thus, there may exist a mappingfrom professor to manager and a mapping from the professor-student relationship to the manager-worker relationship. In thisvein, TAMAR tries to use an MLN learned for a source domain to aid in the learning of an MLN for a target domain. Basically,IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 8 TAMAR is a two-stage algorithm. In the first step, a mapping is constructed from a source MLN to the target domain based on weighted pseudo loglikelihood measure (WPLL). In thesecond step, a revision is done for the mapped structure in thetarget domain through the FORTE algorithm [57], which is an inductive logic programming (ILP) algorithm for revising first order theories. The revised MLN can be used as a relational model for inference or reasoning in the target domain. In the AAAI-2008 workshop on transfer learning for com- plex tasks 4,M i h a l k o v a et al. [51] extended TAMAR to the single-entity-centered setting of transfer learning, where onlyone entity in a target domain is available. Davis et al. [52] proposed an approach to transferring relational knowledgebased on a form of second-order Markov logic. The basic idea of the algorithm is to discover structural regularities inthe source domain in the form of Markov logic formulaswith predicate variables, by instantiating these formulas with predicates from the target domain. 4T RANSDUCTIVE TRANSFER LEARNING The term transductive transfer learning wasfirst proposed by Arnold et al. [58], where they required that the source and target tasks be the same, although the domains may be different. On top of these conditions, they further requiredthat that all unlabeled data in the target domain are available at training time, but we believe that this condition can berelaxed; instead, in our de finition of the transductive transfer learning setting, we only require that part of the unlabeled target data be seen at training time in order to obtain the marginal probability for the target data. Note that the word ’transductive’ is used with several mean- ings. In the traditional machine learning setting, transductive learning [59] refers to the situation where all test data are required to be seen at training time, and that the learned modelcannot be reused for future data. Thus, when some new testdata arrive, they must be classi fied together with all existing data. In our categorization of transfer learning, in contrast, weuse the term transductive to emphasize the concept that in this type of transfer learning, the tasks must be the same and there must be some unlabeled data available in the target domain. Definition 3 (Transductive Transfer Learning) Given a source domain D Sand a corresponding learning task TS,at a r g e t domain DTand a corresponding learning task TT,transductive transfer learning aims to improve the learning of the target predictive function fT(·)inDTusing the knowledge in DS andTS, where DS/negationslash=DTandTS=TT. In addition, some unlabeled target domain data must be available at training time. This de finition covers the work of Arnold et al. [58], since the latter considered domain adaptation , where the difference lies between the marginal probability distributions of sourceand target data; i.e., the tasks are the same but the domainsare different. Similar to the traditional transductive learning setting, which aims to make the best use of the unlabeled test data for learn-ing, in our classi fication scheme under transductive transfer 4. http://www.cs.utexas.edu/ ∼mtaylor/AAAI08TL/learning, we also assume that some target-domain unlabeled data be given. In the above de finition of transductive transfer learning, the source and target tasks are the same, whichimplies that one can adapt the predictive function learnedin the source domain for use in the target domain through some unlabeled target-domain data. As mentioned in Section 2.3, this setting can be split to two cases: (a) The featurespaces between the source and target domains are different,X S/negationslash=XT, and (b) the feature spaces between domains are the same, XS=XT, but the marginal probability distributions of the input data are different, P(XS)/negationslash=P(XT). This is similar to the requirements in domain adaptation and sample selectionbias. Most approaches described in the following sections are related to case (b) above. 4.1 Transferring the Knowledge of Instances Most instance-transfer approaches to the transductive transfer learning setting are motivated by importance sampling. To see how importance sampling based methods may help in this setting, we first review the problem of empirical risk minimization (ERM) [60]. In general, we might want to learnthe optimal parameters θ ∗of the model by minimizing the expected risk, θ∗=a r gm i n θ∈ΘE(x,y)∈P[l(x, y, θ)], where l(x, y, θ)is a loss function that depends on the para- meter θ. However, since it is hard to estimate the probability distribution P, we choose to minimize the ERM instead, θ∗=a r gm i n θ∈Θ1 nn/summationdisplay i=1[l(xi,yi,θ)], where nis size of the training data. In the transductive transfer learning setting, we want to learn an optimal model for the target domain by minimizingthe expected risk, θ ∗=a r gm i n θ∈Θ/summationdisplay (x,y)∈DTP(DT)l(x, y, θ). However, since no labeled data in the target domain are observed in training data, we have to learn a model from thesource domain data instead. If P(D S)=P(DT),t h e nw em a y simply learn the model by solving the following optimization problem for use in the target domain, θ∗=a r gm i n θ∈Θ/summationdisplay (x,y)∈DSP(DS)l(x, y, θ). Otherwise, when P(DS)/negationslash=P(DT), we need to modify the above optimization problem to learn a model with highgeneralization ability for the target domain, as follows: θ ∗=a r g m i n θ∈Θ/summationdisplay (x,y)∈DSP(DT) P(DS)P(DS)l(x, y, θ) ≈arg min θ∈ΘnS/summationdisplay i=1PT(xTi,yTi) PS(xSi,ySi)l(xSi,ySi,θ). (5) Therefore, by adding different penalty values to each instance (xSi,ySi)with the corresponding weightPT(xTi,yTi) PS(xSi,ySi), we canIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 9 learn a precise model for the target domain. Furthermore, sinceP(YT|XT)=P(YS|XS). Thus the difference between P(DS)andP(DT)is caused by P(XS)andP(XT)and PT(xTi,yTi) PS(xSi,ySi)=P(xSi) P(xTi). If we can estimateP(xSi) P(xTi)for each instance, we can solve the transductive transfer learning problems. There exist various ways to estimateP(xSi) P(xTi). Zadrozny [24] proposed to estimate the terms P(xSi)andP(xTi) independently by constructing simple classi fication problems. Fanet al. [35] further analyzed the problems by using various classi fiers to estimate the probability ratio. Huang et al. [32] proposed a kernel-mean matching (KMM) algorithm to learn P(xSi) P(xTi)directly by matching the means between the source domain data and the target domain data in a reproducing- kernel Hilbert space (RKHS). KMM can be rewritten as thefollowing quadratic programming (QP) optimization problem. min β1 2βTKβ−κTβ (6) s.t. β i∈[0,B]and|/summationtextnS i=1βi−nS|≤nS/epsilon1 where K=/bracketleftbigg KS,SKS,T KT, SKT, T/bracketrightbigg andKij=k(xi,xj).KS,Sand KT,Tare kernel matrices for the source domain data and the target domain data, respectively. κi=nS nT/summationtextnT j=1k(xi,xTj), where xi∈XS/uniontextXT, while xTj∈XT. It can be proved that βi=P(xSi) P(xTi)[32]. An advantage of using KMM is that it can avoid performing density estimation of either P(xSi)orP(xTi), which is dif ficult when the size of the data set is small. Sugiyama et al. [34] proposed an algorithm known as Kullback-Leibler Importance Estimation Procedure (KLIEP) to estimateP(xSi) P(xTi)directly, based on the minimization of the Kullback-Leibler divergence. KLIEP can be integrated with cross-validation to perform model selec-tion automatically in two steps: (1) estimating the weightsof the source domain data; (2) training models on the re-weighted data. Bickel et al. [33] combined the two steps in a uni fied framework by deriving a kernel-logistic regression classi fier. Besides sample re-weighting techniques, Dai et al.[28] extended a traditional Naive Bayesian classi fier for the transductive transfer learning problems. For more informationon importance sampling and re-weighting methods for co-variate shift or sample selection bias, readers can refer to arecently published book [29] by Quionero-Candela et al. Onecan also consult a tutorial on Sample Selection Bias by Fanand Sugiyama in ICDM-08 5. 4.2 Transferring Knowledge of Feature Representa- tions Most feature-representation transfer approaches to the trans- ductive transfer learning setting are under unsupervised learn- ing frameworks. Blitzer et al. [38] proposed a structural cor- respondence learning (SCL) algorithm, which extends [37],to make use of the unlabeled data from the target domain toextract some revelent features that may reduce the difference 5. Tutorial slides can be found at http://www.cs.columbia.edu/ ∼ fan/PPT/ICDM08SampleBias.pptbetween the domains. The first step of SCL is to de fine a set ofpivot features6(the number of pivot feature is denoted bym) on the unlabeled data from both domains. Then, SCL removes these pivot features from the data and treats each pivot feature as a new label vector. The mclassi fication problems can be constructed. By assuming each problem can be solvedby linear classi fier, which is shown as follows, f l(x)=sgn(wT l·x),l=1,...,m SCL can learn a matrix W=[w1w2...w m]of parameters. In the third step, singular value decomposition (SVD) is appliedto matrix W=[w 1w2...w m].L e t W=UDVT, then θ=UT [1:h,:](his the number of the shared features) is the matrix (linear mapping) whose rows are the top left singularvectors of W. Finally, standard discriminative algorithms can be applied to the augmented feature vector to build models.The augmented feature vector contains all the original featurex iappended with the new shared features θxi. As mentioned in [38], if the pivot features are well designed, then the learned mapping θencodes the correspondence between the features from the different domains. Although Ben-David andSchuller [61] showed experimentally that SCL can reduce thedifference between domains, how to select the pivot features is difficult and domain-dependent. In [38], Blitzer et al. used a heuristic method to select pivot features for natural languageprocessing (NLP) problems, such as tagging of sentences. Intheir follow-up work, the researchers proposed to use MutualInformation (MI) to choose the pivot features instead of usingmore heuristic criteria [8]. MI-SCL tries to find some pivot features that have high dependence on the labels in the source domain. Transfer learning in the NLP domain is sometimes re- ferred to as domain adaptation. In this area, Daum ´e [39] proposed a kernel-mapping function for NLP problems, which maps the data from both source and target domains to ahigh-dimensional feature space, where standard discriminativelearning methods are used to train the classi fiers. However, the constructed kernel mapping function is domain knowledgedriven. It is not easy to generalize the kernel mapping to otherareas or applications. Blitzer et al. [62] analyzed the uniform convergence bounds for algorithms that minimized a convex combination of source and target empirical risks. In [36], Dai et al. proposed a co-clustering based algorithm to propagate the label information across different domains. In [63], Xing et al. proposed a novel algorithm known as bridged re finement to correct the labels predicted by a shift- unaware classi fier towards a target distribution and take the mixture distribution of the training and test data as a bridge tobetter transfer from the training data to the test data. In [64], Ling et al. proposed a spectral classi fication framework for cross-domain transfer learning problem, where the objectivefunction is introduced to seek consistency between the in-domain supervision and the out-of-domain intrinsic structure.In [65], Xue et al. proposed a cross-domain text classi fication algorithm that extended the traditional probabilistic latentsemantic analysis (PLSA) algorithm to integrate labeled and 6. The pivot features are domain speci fic and depend on prior knowledgeIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 10 unlabeled data from different but related domains, into a unified probabilistic model. The new model is called Topic- bridged PLSA, or TPLSA. Transfer learning via dimensionality reduction was re- cently proposed by Pan et al. [66]. In this work, Pan et al.exploited the Maximum Mean Discrepancy Embedding (MMDE) method, originally designed for dimensionality re-duction, to learn a low dimensional space to reduce thedifference of distributions between different domains for trans-ductive transfer learning. However, MMDE may suffer fromits computational burden. Thus, in [67], Pan et al. further proposed an ef ficient feature extraction algorithm, known as Transfer Component Analysis (TCA) to overcome the draw-back of MMDE. 5U NSUPERVISED TRANSFER LEARNING Definition 4 (Unsupervised Transfer Learning) Given a source domain DSwith a learning task TS, a target domain DT and a corresponding learning task TT,unsupervised transfer learning aims to help improve the learning of the target predictive function fT(·)7inDTusing the knowledge in DS andTS, where TS/negationslash=TTandYSandYTare not observable. Based on the de finition of the unsupervised transfer learn- ingsetting, no labeled data are observed in the source and target domains in training. So far, there is little research workon this setting. Recently, Self-taught clustering (STC) [26] and transferred discriminative analysis (TDA) [27] algorithmsare proposed to transfer clustering and transfer dimensionalityreduction problems, respectively. 5.1 Transferring Knowledge of Feature Representa- tions Dai et al. [26] studied a new case of clustering problems, known as self-taught clustering .Self-taught clustering is an instance of unsupervised transfer learning , which aims at clustering a small collection of unlabeled data in the targetdomain with the help of a large amount of unlabeled data inthe source domain. STC tries to learn a common feature spaceacross domains, which helps in clustering in the target domain.The objective function of STC is shown as follows. J(˜X T,˜XS,˜Z) (7) =I(XT,Z)−I(˜XT,˜Z)+λ/bracketleftBig I(XS,Z)−I(˜XS,˜Z)/bracketrightBig where XSandXTare the source and target domain data, respectively. Zis a shared feature space by XSandXT, andI(·,·)is the mutual information between two random variables. Suppose that there exist three clustering functionsC XT:XT→˜XT,CXS:XS→˜XSandCZ:Z→˜Z, where ˜XT,˜XSand˜Zare corresponding clusters of XT,XSandZ, respectively. The goal of STC is to learn ˜XTby solving the optimization problem (7): arg min ˜XT,˜XS,˜ZJ(˜XT,˜XS,˜Z) (8) 7. In unsupervised transfer learning, the predicted labels are latent variables, such as clusters or reduced dimensionsAn iterative algorithm for solving the optimization function (8) was given in [26]. Similarly, [27] proposed a transferred discriminative analy- sis(TDA) algorithm to solve the transfer dimensionality reduction problem. TDA first applies clustering methods to generate pseudo-class labels for the target unlabeled data. Itthen applies dimensionality reduction methods to the targetdata and labeled source data to reduce the dimensions. Thesetwo steps run iteratively to find the best subspace for the target data. 6T RANSFER BOUNDS AND NEGATIVE TRANSFER An important issue is to recognize the limit of the power oftransfer learning. In [68], Hassan Mahmud and Ray analyzedthe case of transfer learning using Kolmogorov complexity,where some theoretical bounds are proved. In particular, theauthors used conditional Kolmogorov complexity to measurerelatedness between tasks and transfer the “right” amountof information in a sequential transfer learning task under aBayesian framework. Recently, Eaton et al. [69] proposed a novel graph-based method for knowledge transfer, where the relationships be-tween source tasks are modeled by embedding the set oflearned source models in a graph using transferability as themetric. Transferring to a new task proceeds by mapping theproblem into the graph and then learning a function on thisgraph that automatically determines the parameters to transferto the new learning task. Negative transfer happens when the source domain data and task contribute to the reduced performance of learning in thetarget domain. Despite the fact that how to avoid negativetransfer is a very important issue, little research work hasbeen published on this topic. Rosenstein et al. [70] empirically showed that if two tasks are too dissimilar, then brute-forcetransfer may hurt the performance of the target task. Someworks have been exploited to analyze relatedness among tasks and task clustering techniques, such as [71], [72], whichmay help provide guidance on how to avoid negative transferautomatically. Bakker and Heskes [72] adopted a Bayesian approach in which some of the model parameters are shared for all tasks and others more loosely connected through a jointprior distribution that can be learned from the data. Thus,the data are clustered based on the task parameters, wheretasks in the same cluster are supposed to be related to each others. Argyriou et al. [73] considered situations in which the learning tasks can be divided into groups. Tasks within eachgroup are related by sharing a low-dimensional representation,which differs among different groups. As a result, tasks withina group can find it easier to transfer useful knowledge. 7A PPLICATIONS OF TRANSFER LEARNING Recently, transfer learning techniques have been applied suc- cessfully in many real-world applications. Raina et al. [74] and Daiet al. [36], [28] proposed to use transfer learning tech- niques to learn text data across domains, respectively. Blitzeret al. [38] proposed to use SCL for solving NLP problems. AnIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 11 extension of SCL was proposed in [8] for solving sentiment classi fication problems. Wu and Dietterich [53] proposed to use both inadequate target domain data and plenty of lowquality source domain data for image classi fication problems. Arnold et al. [58] proposed to use transductive transfer learn- ingmethods to solve name-entity recognition problems. In [75], [76], [77], [78], [79], transfer learning techniques are proposed to extract knowledge from WiFi localization models across time periods, space and mobile devices, to bene fitW i F i localization tasks in other settings. Zhuo et al. [80] studied how to transfer domain knowledge to learn relational action modelsacross domains in automated planning. In [81], Raykar et al. proposed a novel Bayesian multiple- instance learning algorithm, which can automatically identifythe relevant feature subset and use inductive transfer forlearning multiple, but conceptually related, classi fiers, for computer aided design (CAD). In [82], Ling et al. proposed an information-theoretic approach for transfer learning to addressthecross-language classi fication problem for translating Web pages from English to Chinese. The approach addressed theproblem when there are plenty of labeled English text data whereas there are only a small number of labeled Chinese text documents. Transfer learning across the two feature spaces areachieved by designing a suitable mapping function as a bridge. So far, there are at least two international competitions based on transfer learning, which made available some much neededpublic data. In the ECML/PKDD-2006 discovery challenge 8, the task was to handle personalized spam filtering and generalization across related learning tasks. For training a spam- filtering system, we need to collect a lot of emails from a group of users with corresponding labels: spam ornot spam , and train a classi fier based on these data. For a new email user, we might want to adapt the learned model for the user. Thechallenge is that the distributions of emails for the first set of users and the new user are different. Thus, this problem can be modeled as an inductive transfer learning problem, which aims to adapt an old spam- filtering model to a new situation with fewer training data and less training time. A second data set was made available through the ICDM- 2007 Contest, in which a task was to estimate a WiFi client’sindoor locations using the WiFi signal data obtained overdifferent periods of time [83]. Since the values of WiFisignal strength may be a function of time, space and devices, distributions of WiFi data over different time periods may be very different. Thus, transfer learning must be designed toreduce the data re-labeling effort. Data Sets for Transfer Learning: So far, several data sets have been published for transfer learning research. Wedenote the text mining data sets, Email spam- filtering data set, the WiFi localization over time periods data set and theSentiment classi fication data set by Text,Email ,WiFi and Sen, respectively. Text Three data sets, 20 Newsgroups, SRAA and Reuters- 21578 9, have been preprocessed for a transfer learn- ing setting by some researchers. The data in these 8.http://www.ecmlpkdd2006.org/challenge.html 9.http://apex.sjtu.edu.cn/apex wiki/dwyakdata sets are categorized to a hierarchical structure. Data from different sub-categories under the same parent category are considered to be from differentbut related domains. The task is to predict the labelsof the parent category. Email This data set is provided by the 2006 ECML/PKDD discovery challenge. WiFi This data set is provided by the ICDM-2007 Contest 10. The data were collected inside a building for localization around 145.5×37.5m2in two different time periods. Sen This data set was first used in [8]11. This data set con- tains product reviews downloaded from Amazon.com from4product types (domains): Kitchen, Books, DVDs, and Electronics. Each domain has severalthousand reviews, but the exact number varies bydomain. Reviews contain star ratings (1 to 5 stars). Empirical Evaluation To show how much bene fit transfer learning methods can bring as compared to traditional learningmethods, researchers have used some public data sets. Weshow a list taken from some published transfer learningpapers in Table 5. In [6], [84], [49], the authors used the 20 Newsgroups data 12as one of the evaluation data sets. Due to the differences in the preprocessing steps of the algorithmsby different researchers, it is hard to compare the proposedmethods directly. Thus, we denote them by 20-Newsgroups 1, 20-Newsgroups 2and 20-Newsgroups 3, respectively, and show the comparison results between the proposed transfer learningmethods and non-transfer learning methods in the table. On the 20 Newsgroups 1data, Dai et al. [6] showed the comparison experiments between standard Support Vector Machine (SVM) and the proposed TrAdaBoost algorithm. On20 Newsgroups 2,S h i et al. [84] applied an active learning algorithm to select important instances for transfer learning(AcTraK) with TrAdaBoost and standard SVM. Gao et al. [49] evaluated their proposed locally weighted ensemble learningalgorithms, pLWE and LWE, on the 20 Newsgroups 3, com- pared to SVM and Logistic Regression (LR). In addition, in the table, we also show the comparison results on the sentiment classi fication data set reported in [8]. On this data set, SGD denotes the stochastic gradient-descent algorithm with Huber loss, SCL represents a linear predictoron the new representations learned by Structural Correspon-dence Learning algorithm, and SCL-MI is an extension of SCLby applying Mutual Information to select the pivot features forthe SCL algorithm. Finally, on the WiFi localization data set, we show the comparison results reported in [67], where the baseline is aregularized least square regression model (RLSR), which is a standard regression model, and KPCA, which represents toapply RLSR on the new representations of the data learned by Kernel Principle Component Analysis. The compared transferlearning methods include Kernel Mean Matching (KMM) andthe proposed algorithm, Transfer Component Analysis (TCA). 10.http://www.cse.ust.hk/ ∼qyang/ICDMDMC2007 11.http://www.cis.upenn.edu/ ∼mdredze/datasets/sentiment/ 12.http://people.csail.mit.edu/jrennie/20Newsgroups/IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 12 For more detail about the experimental results, the readers may refer to the reference papers showed in the table. From these comparison results, we can find that the transfer learning methods designed appropriately for real world applications canindeed improve the performance signi ficantly compared to the non-transfer learning methods. Toolboxes for Transfer Learning: Researchers at UC Berke- ley provided a MATLAB toolkit for transfer learning 13. The toolkit contains algorithms and benchmark data sets for transfer learning. In addition, it provides a standard platformfor developing and testing new algorithms for transfer learning. 7.1 Other Applications of Transfer Learning Transfer learning has found many applications in sequential machine learning as well. For example, [85] proposed a graph-based method for identifying previously encountered games,and applied this technique to automate domain mapping forvalue function transfer and speed up reinforcement learning on variants of previously played games. A new approach totransfer between entirely different feature spaces is proposedintranslated learning , which is made possible by learning a mapping function for bridging features in two entirelydifferent domains (images and text) [86]. Finally, Li et al. [87], [88] have applied transfer learning to collaborative filtering problems to solve the cold start and sparsity problems. In [87], Liet al. learned a shared rating-pattern mixture model, known as a Rating-Matrix Generative Model (RMGM), in terms ofthe latent user- and item-cluster variables. RMGM bridgesmultiple rating matrices from different domains by mappingthe users and items in each rating matrix onto the shared latent user and item spaces in order to transfer useful knowledge. In [88], they applied co-clustering algorithms on users anditems in an auxiliary rating matrix. They then constructed acluster-level rating matrix known as a codebook. By assumingthe target rating matrix (on movies) is related to the auxiliaryone (on books), the target domain can be reconstructed byexpanding the codebook, completing the knowledge transferprocess. 8C ONCLUSIONS In this survey article, we have reviewed several current trendsof transfer learning. Transfer learning is classi fied to three different settings: inductive transfer learning, transductivetransfer learning and unsupervised transfer learning. Most pre-vious works focused on the former two settings. Unsupervisedtransfer learning may attract more and more attention in the future. Furthermore, each of the approaches to transfer learning can be classi fied into four contexts based on “what to trans- fer” in learning. They include the instance-transfer approach,the feature-representation-transfer approach, the parameter-transfer approach and the relational-knowledge-transfer ap-proach, respectively. The former three contexts have an i.i.dassumption on the data while the last context deals withtransfer learning on relational data. Most of these approaches 13.http://multitask.cs.berkeley.edu/assume that the selected source domain is related to the target domain. In the future, several important research issues need to be addressed. First, how to avoid negative transfer is an openproblem. As mentioned in Section 6, many proposed transferlearning algorithms assume that the source and target domainsare related to each other in some sense. However, if the as-sumption does not hold, negative transfer may happen, whichmay cause the learner to perform worse than no transferring atall. Thus, how to make sure that no negative transfer happensis a crucial issue in transfer learning. In order to avoid negative transfer learning, we need to first study transferability between source domains or tasks and target domains or tasks. Based onsuitable transferability measures, we can then select relevantsource domains or tasks to extract knowledge from for learningthe target tasks. To de fine the transferability between domains and tasks, we also need to de fine the criteria to measure the similarity between domains or tasks. Based on the distancemeasures, we can then cluster domains or tasks, which may help measure transferability. A related issue is when an entiredomain cannot be used for transfer learning, whether we canstill transfer part of the domain for useful learning in the targetdomain. In addition, most existing transfer learning algorithms so far focused on improving generalization across different distribu-tions between source and target domains or tasks. In doing so,they assumed that the feature spaces between the source andtarget domains are the same. However, in many applications,we may wish to transfer knowledge across domains or tasks that have different feature spaces, and transfer from multiplesuch source domains. We refer to this type of transfer learning asheterogeneous transfer learning . Finally, so far transfer learning techniques have been mainly applied to small scale applications with a limited variety, suchas sensor-network-based localization, text classi fication and image classi fication problems. In the future, transfer learning techniques will be widely used to solve other challenging ap- plications, such as video classi fication, social network analysis and logical inference. Acknowledgment We thank the support of Hong Kong CERG Project 621307 and a grant from NEC China Lab. REFERENCES [1] X. Wu, V . Kumar, J. R. Quinlan, J. Ghosh, Q. Yang, H. Motoda, G. J. McLachlan, A. F. M. Ng, B. Liu, P. S. Yu, Z.-H. Zhou, M. Steinbach,D. J. Hand, and D. Steinberg, “Top 10 algorithms in data mining,” Knowl. Inf. Syst. , vol. 14, no. 1, pp. 1–37, 2008. [2] Q. Yang and X. Wu, “10 challenging problems in data mining research,” International Journal of Information Technology and Decision Making , vol. 5, no. 4, pp. 597–604, 2006. [3] G. P. C. Fung, J. X. Yu, H. Lu, and P. S. Yu, “Text classi fication without negative examples revisit,” IEEE Transactions on Knowledge and Data Engineering , vol. 18, no. 1, pp. 6–20, 2006. [4] H. Al-Mubaid and S. A. Umair, “A new text categorization technique using distributional clustering and learning logic,” IEEE Transactions on Knowledge and Data Engineering , vol. 18, no. 9, pp. 1156–1165, 2006.IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 13 TABLE 5 Comparison between transfer learning and non-transfer learning methods Data Set (reference) Source v.s. Target Baselines TL Methods 20 Newsgroups 1([6]) SVM TrAdaBoost ACC (unit: %) rec v.s. talk 87.3% 92.0% rec v.s. sci 83.6% 90.3% sci v.s. talk 82.3% 87.5% 20 Newsgroups 2([84]) SVM TrAdaBoost AcTraK ACC (unit: %) rec v.s. talk 60.2% 72.3% 75.4% rec v.s. sci 59.1% 67.4% 70.6% comp v.s. talk 53.6% 74.4% 80.9% comp v.s. sci 52.7% 57.3% 78.0% comp v.s. rec 49.1% 77.2% 82.1% sci v.s. talk 57.6% 71.3% 75.1% 20 Newsgroups 3([49]) SVM LR pLWE LWE ACC (unit: %) comp v.s. sci 71.18% 73.49% 78.72% 97.44% rec v.s. talk 68.24% 72.17% 72.17% 99.23% rec v.s. sci 78.16% 78.85% 88.45% 98.23% sci v.s. talk 75.77% 79.04% 83.30% 96.92% comp v.s. rec 81.56% 83.34% 91.93% 98.16% comp v.s. talk 93.89% 91.76% 96.64% 98.90% Sentiment Classi fication ([8]) SGD SCL SCL-MI ACC (unit: %) DVD v.s. book 72.8% 76.8% 79.7% electronics v.s. book 70.7% 75.4% 75.4% kitchen v.s. book 70.9% 66.1% 68.6% book v.s. DVD 77.2% 74.0% 75.8% electronics v.s. DVD 70.6% 74.3% 76.2% kitchen v.s. DVD 72.7% 75.4% 76.9% book v.s. electronics 70.8% 77.5% 75.9% DVD v.s. electronics 73.0% 74.1% 74.1% kitchen v.s. electronics 82.7% 83.7% 86.8% book v.s. kitchen 74.5% 78.7% 78.9% DVD v.s. kitchen 74.0% 79.4% 81.4% electronics v.s. kitchen 84.0% 84.4% 85.9% WiFi Localization ([67]) RLSR PCA KMM TCA AED (unit: meter) Time Av.s. Time B 6.52 3.16 5.51 2.37 [5] K. Sarinnapakorn and M. Kubat, “Combining subclassi fiers in text cat- egorization: A dst-based solution and a case study,” IEEE Transactions on Knowledge and Data Engineering , vol. 19, no. 12, pp. 1638–1651, 2007. [6] W. Dai, Q. Yang, G. Xue, and Y . Yu, “Boosting for transfer learning,” in Proceedings of the 24th International Conference on Machine Learning , Corvalis, Oregon, USA, June 2007, pp. 193–200. [7] S. J. Pan, V . W. Zheng, Q. Yang, and D. H. Hu, “Transfer learning for wi fi-based indoor localization,” in Proceedings of the Workshop on Transfer Learning for Complex Task of the 23rd AAAI Conference on Artificial Intelligence , Chicago, Illinois, USA, July 2008. [8] J. Blitzer, M. Dredze, and F. Pereira, “Biographies, bollywood, boom- boxes and blenders: Domain adaptation for sentiment classi fication,” inProceedings of the 45th Annual Meeting of the Association of Computational Linguistics , Prague, Czech Republic, 2007, pp. 432–439. [9] J. Ramon, K. Driessens, and T. Croonenborghs, “Transfer learning in reinforcement learning problems through partial policy recycling,” in ECML ’07: Proceedings of the 18th European conference on MachineLearning . Berlin, Heidelberg: Springer-Verlag, 2007, pp. 699–707. [10] M. E. Taylor and P. Stone, “Cross-domain transfer for reinforcement learning,” in ICML ’07: Proceedings of the 24th international conference on Machine learning . New York, NY , USA: ACM, 2007, pp. 879–886. [11] X. Yin, J. Han, J. Yang, and P. S. Yu, “Ef ficient classi fication across multiple database relations: A crossmine approach,” IEEE Transactions on Knowledge and Data Engineering , vol. 18, no. 6, pp. 770–783, 2006. [12] L. I. Kuncheva and J. J. Rodr łguez, “Classi fier ensembles with a random linear oracle,” IEEE Transactions on Knowledge and Data Engineering , vol. 19, no. 4, pp. 500–508, 2007. [13] E. Baralis, S. Chiusano, and P. Garza, “A lazy approach to associative classi fication,” IEEE Transactions on Knowledge and Data Engineering , vol. 20, no. 2, pp. 156–171, 2008. [14] X. Zhu, “Semi-supervised learning literature survey,” University ofWisconsin–Madison, Tech. Rep. 1530, 2006. [15] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell, “Text clas- sification from labeled and unlabeled documents using em,” Machine Learning , vol. 39, no. 2-3, pp. 103–134, 2000. [16] A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the Eleventh Annual Conference on Computational Learning Theory , 1998, pp. 92–100. [17] T. Joachims, “Transductive inference for text classi fication using support vector machines,” in Proceedings of Sixteenth International Conference on Machine Learning , 1999, pp. 825–830. [18] X. Zhu and X. Wu, “Class noise handling for effective cost-sensitive learning by cost-guided iterative classi fication filtering,” IEEE Transac- tions on Knowledge and Data Engineering , vol. 18, no. 10, pp. 1435– 1440, 2006. [19] Q. Yang, C. Ling, X. Chai, and R. Pan, “Test-cost sensitive classi fication on data with missing values,” IEEE Transactions on Knowledge and Data Engineering , vol. 18, no. 5, pp. 626–638, 2006. [20] S. Thrun and L. Pratt, Eds., Learning to learn . Norwell, MA, USA: Kluwer Academic Publishers, 1998. [21] R. Caruana, “Multitask learning,” Machine Learning , vol. 28(1), pp. 41– 75, 1997. [22] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y . Ng, “Self-taught learning: Transfer learning from unlabeled data,” in Proceedings of the 24th International Conference on Machine Learning , Corvalis, Oregon, USA, June 2007, pp. 759–766. [23] H. Daum ´eIII and D. Marcu, “Domain adaptation for statistical classi- fiers,” Journal of Arti ficial Intelligence Research , vol. 26, pp. 101–126, 2006. [24] B. Zadrozny, “Learning and evaluating classi fiers under sample selection bias,” in Proceedings of the 21st International Conference on Machine Learning , Banff, Alberta, Canada, July 2004. [25] H. Shimodaira, “Improving predictive inference under covariate shift byIEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 14 weighting the log-likelihood function,” Journal of Statistical Planning and Inference , vol. 90, pp. 227–244, 2000. [26] W. Dai, Q. Yang, G. Xue, and Y . Yu, “Self-taught clustering,” in Proceedings of the 25th International Conference of Machine Learning . ACM, July 2008, pp. 200–207. [27] Z. Wang, Y . Song, and C. Zhang, “Transferred dimensionality reduc- tion,” in Machine Learning and Knowledge Discovery in Databases, Eu- ropean Conference, ECML/PKDD 2008 . Antwerp, Belgium: Springer, September 2008, pp. 550–565. [28] W. Dai, G. Xue, Q. Yang, and Y . Yu, “Transferring naive bayes classi fiers for text classi fication,” in Proceedings of the 22rd AAAI Conference on Artificial Intelligence , Vancouver, British Columbia, Canada, July 2007, pp. 540–545. [29] J. Quionero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence, Dataset Shift in Machine Learning . The MIT Press, 2009. [30] J. Jiang and C. Zhai, “Instance weighting for domain adaptation in nlp,” in Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics . Prague, Czech Republic: Association for Computational Linguistics, June 2007, pp. 264–271. [31] X. Liao, Y . Xue, and L. Carin, “Logistic regression with an auxiliary data source,” in Proceedings of the 21st International Conference on Machine Learning , Bonn, Germany, August 2005, pp. 505–512. [32] J. Huang, A. Smola, A. Gretton, K. M. Borgwardt, and B. Sch ¨olkopf, “Correcting sample selection bias by unlabeled data,” in Proceedings of the 19th Annual Conference on Neural Information Processing Systems , 2007. [33] S. Bickel, M. Br ¨uckner, and T. Scheffer, “Discriminative learning for differing training and test distributions,” in Proceedings of the 24th international conference on Machine learning . New York, NY , USA: ACM, 2007, pp. 81–88. [34] M. Sugiyama, S. Nakajima, H. Kashima, P. V . Buenau, and M. Kawan- abe, “Direct importance estimation with model selection and its appli-cation to covariate shift adaptation,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems , Vancouver, British Columbia, Canada, December 2008. [35] W. Fan, I. Davidson, B. Zadrozny, and P. S. Yu, “An improved categorization of classi fier’s sensitivity on sample selection bias,” in Proceedings of the 5th IEEE International Conference on Data Mining , 2005. [36] W. Dai, G. Xue, Q. Yang, and Y . Yu, “Co-clustering based classi fica- tion for out-of-domain documents,” in Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining , San Jose, California, USA, August 2007. [37] R. K. Ando and T. Zhang, “A high-performance semi-supervised learn- ing method for text chunking,” in Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics . Morristown, NJ, USA: Association for Computational Linguistics, 2005, pp. 1–9. [38] J. Blitzer, R. McDonald, and F. Pereira, “Domain adaptation with structural correspondence learning,” in Proceedings of the Conference on Empirical Methods in Natural Language , Sydney, Australia, July 2006, pp. 120–128. [39] H. Daum ´e III, “Frustratingly easy domain adaptation,” in Proceedings of the 45th Annual Meeting of the Association of Computational Lin- guistics , Prague, Czech Republic, June 2007, pp. 256–263. [40] A. Argyriou, T. Evgeniou, and M. Pontil, “Multi-task feature learning,” inProceedings of the 19th Annual Conference on Neural Information Processing Systems , Vancouver, British Columbia, Canada, December 2007, pp. 41–48. [41] A. Argyriou, C. A. Micchelli, M. Pontil, and Y . Ying, “A spectral regu- larization framework for multi-task structure learning,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems . Cambridge, MA: MIT Press, 2008, pp. 25–32. [42] S.-I. Lee, V . Chatalbashev, D. Vickrey, and D. Koller, “Learning a meta-level prior for feature relevance from multiple related tasks,” in Proceedings of the 24th International Conference on Machine Learning . Corvalis, Oregon, USA: ACM, July 2007, pp. 489–496. [43] T. Jebara, “Multi-task feature and kernel selection for svms,” in Proceed- ings of the 21st International Conference on Machine Learning .B a n f f , Alberta, Canada: ACM, July 2004. [44] C. Wang and S. Mahadevan, “Manifold alignment using procrustes analysis,” in Proceedings of the 25th International Conference on Machine learning . Helsinki, Finland: ACM, July 2008, pp. 1120–1127. [45] N. D. Lawrence and J. C. Platt, “Learning to learn with the informative vector machine,” in Proceedings of the 21st International Conference on Machine Learning . Banff, Alberta, Canada: ACM, July 2004. [46] E. Bonilla, K. M. Chai, and C. Williams, “Multi-task gaussian process prediction,” in Proceedings of the 20th Annual Conference on NeuralInformation Processing Systems . Cambridge, MA: MIT Press, 2008, pp. 153–160. [47] A. Schwaighofer, V . Tresp, and K. Yu, “Learning gaussian process kernels via hierarchical bayes,” in Proceedings of the 17th Annual Conference on Neural Information Processing Systems . Cambridge, MA: MIT Press, 2005, pp. 1209–1216. [48] T. Evgeniou and M. Pontil, “Regularized multi-task learning,” in Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining . Seattle, Washington, USA: ACM, August 2004, pp. 109–117. [49] J. Gao, W. Fan, J. Jiang, and J. Han, “Knowledge transfer via multiple model local structure mapping,” in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining . Las Vegas, Nevada: ACM, August 2008, pp. 283–291. [50] L. Mihalkova, T. Huynh, and R. J. Mooney, “Mapping and revising markov logic networks for transfer learning,” in Proceedings of the 22nd AAAI Conference on Arti ficial Intelligence , Vancouver, British Columbia, Canada, July 2007, pp. 608–614. [51] L. Mihalkova and R. J. Mooney, “Transfer learning by mapping with minimal target data,” in Proceedings of the AAAI-2008 Workshop on Transfer Learning for Complex Tasks , Chicago, Illinois, USA, July 2008. [52] J. Davis and P. Domingos, “Deep transfer via second-order markov logic,” in Proceedings of the AAAI-2008 Workshop on Transfer Learning for Complex Tasks , Chicago, Illinois, USA, July 2008. [53] P. Wu and T. G. Dietterich, “Improving svm accuracy by training on auxiliary data sources,” in Proceedings of the 21st International Conference on Machine Learning . Banff, Alberta, Canada: ACM, July 2004. [54] U. R ¨uckert and S. Kramer, “Kernel-based inductive transfer,” in Machine Learning and Knowledge Discovery in Databases, European Confer- ence, ECML/PKDD 2008 , ser. Lecture Notes in Computer Science. Antwerp, Belgium: Springer, September 2008, pp. 220–233. [55] H. Lee, A. Battle, R. Raina, and A. Y . Ng, “Ef ficient sparse coding algorithms,” in Proceedings of the the 19th Annual Conference on Neural Information Processing Systemss . Cambridge, MA: MIT Press, 2007, pp. 801–808. [56] M. Richardson and P. Domingos, “Markov logic networks,” Machine Learning Journal , vol. 62, no. 1-2, pp. 107–136, 2006. [57] S. Ramachandran and R. J. Mooney, “Theory re finement of bayesian networks with hidden variables,” in Proceedings of the 14th Interna- tional Conference on Machine Learning , Madison, Wisconson, USA, July 1998, pp. 454–462. [58] A. Arnold, R. Nallapati, and W. W. Cohen, “A comparative study of methods for transductive transfer learning,” in Proceedings of the 7th IEEE International Conference on Data Mining Workshops . Washing- ton, DC, USA: IEEE Computer Society, 2007, pp. 77–82. [59] T. Joachims, “Transductive inference for text classi fication using support vector machines,” in Proceedings of the Sixteenth International Confer- ence on Machine Learning , San Francisco, CA, USA, 1999, pp. 200– 209. [60] V . N. Vapnik, Statistical Learning Theory . New York: Wiley- Interscience, September 1998. [61] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira, “Analysis of rep- resentations for domain adaptation,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems . Cambridge, MA: MIT Press, 2007, pp. 137–144. [62] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman, “Learn- ing bounds for domain adaptation,” in Proceedings of the 21st Annual Conference on Neural Information Processing Systems . Cambridge, MA: MIT Press, 2008, pp. 129–136. [63] D. Xing, W. Dai, G.-R. Xue, and Y . Yu, “Bridged re finement for transfer learning,” in 11th European Conference on Principles and Practice of Knowledge Discovery in Databases , ser. Lecture Notes in Computer Science. Warsaw, Poland: Springer, September 2007, pp. 324–335. [64] X. Ling, W. Dai, G.-R. Xue, Q. Yang, and Y . Yu, “Spectral domain- transfer learning,” in Proceedings of the 14th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining . Las Vegas, Nevada: ACM, August 2008, pp. 488–496. [65] G.-R. Xue, W. Dai, Q. Yang, and Y . Yu, “Topic-bridged plsa for cross-domain text classi fication,” in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Developmentin Information Retrieval . Singapore: ACM, July 2008, pp. 627–634. [66] S. J. Pan, J. T. Kwok, and Q. Yang, “Transfer learning via dimensionality reduction,” in Proceedings of the 23rd AAAI Conference on Arti ficial Intelligence , Chicago, Illinois, USA, July 2008, pp. 677–682.IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication. 15 [67] S. J. Pan, I. W. Tsang, J. T. Kwok, and Q. Yang, “Domain adaptation via transfer component analysis,” in Proceedings of the 21st International Joint Conference on Arti ficial Intelligence , Pasadena, California, 2009. [68] M. M. H. Mahmud and S. R. Ray, “Transfer learning using kolmogorov complexity: Basic theory and empirical evaluations,” in Proceedings of the 20th Annual Conference on Neural Information Processing Systems . Cambridge, MA: MIT Press, 2008, pp. 985–992. [69] E. Eaton, M. desJardins, and T. Lane, “Modeling transfer relationships between learning tasks for improved inductive transfer,” in Machine Learning and Knowledge Discovery in Databases, European Confer- ence, ECML/PKDD 2008 , ser. Lecture Notes in Computer Science. Antwerp, Belgium: Springer, September 2008, pp. 317–332. [70] M. T. Rosenstein, Z. Marx, and L. P. Kaelbling, “To transfer or not to transfer,” in a NIPS-05 Workshop on Inductive Transfer: 10 Years Later , December 2005. [71] S. Ben-David and R. Schuller, “Exploiting task relatedness for multiple task learning,” in Proceedings of the Sixteenth Annual Conference on Learning Theory . San Francisco: Morgan Kaufmann, 2003, pp. 825– 830. [72] B. Bakker and T. Heskes, “Task clustering and gating for bayesian multitask learning,” Journal of Machine Learning Reserch ,v o l .4 ,p p . 83–99, 2003. [73] A. Argyriou, A. Maurer, and M. Pontil, “An algorithm for transfer learn- ing in a heterogeneous environment,” in Machine Learning and Knowl- edge Discovery in Databases, European Conference, ECML/PKDD 2008 , ser. Lecture Notes in Computer Science. Antwerp, Belgium: Springer, September 2008, pp. 71–85. [74] R. Raina, A. Y . Ng, and D. Koller, “Constructing informative priors using transfer learning.” in Proceedings of the 23rd International Conference on Machine Learning , Pittsburgh, Pennsylvania, USA, June 2006, pp. 713–720. [75] J. Yin, Q. Yang, and L. M. Ni, “Adaptive temporal radio maps for indoor location estimation,” in Proceedings of the 3rd IEEE International Conference on Pervasive Computing and Communications , Kauai Island, Hawaii, USA, March 2005. [76] S. J. Pan, J. T. Kwok, Q. Yang, and J. J. Pan, “Adaptive localization in a dynamic WiFi environment through multi-view learning,” in Proceed- ings of the 22nd AAAI Conference on Arti ficial Intelligence , Vancouver, British Columbia, Canada, July 2007, pp. 1108–1113. [77] V . W. Zheng, Q. Yang, W. Xiang, and D. Shen, “Transferring localization models over time,” in Proceedings of the 23rd AAAI Conference on Artificial Intelligence , Chicago, Illinois, USA, July 2008, pp. 1421–1426. [78] S. J. Pan, D. Shen, Q. Yang, and J. T. Kwok, “Transferring localization models across space,” in Proceedings of the 23rd AAAI Conference on Artificial Intelligence , Chicago, Illinois, USA, July 2008, pp. 1383–1388. [79] V . W. Zheng, S. J. Pan, Q. Yang, and J. J. Pan, “Transferring multi-device localization models using latent multi-task learning,” in Proceedings of the 23rd AAAI Conference on Arti ficial Intelligence , Chicago, Illinois, USA, July 2008, pp. 1427–1432. [80] H. Zhuo, Q. Yang, D. H. Hu, and L. Li, “Transferring knowledge from another domain for learning action models,” in Proceedings of 10th Paci fic Rim International Conference on Arti ficial Intelligence , December 2008. [81] V . C. Raykar, B. Krishnapuram, J. Bi, M. Dundar, and R. B. Rao, “Bayesian multiple instance learning: automatic feature selection andinductive transfer,” in Proceedings of the 25th International Conference on Machine learning . Helsinki, Finland: ACM, July 2008, pp. 808–815. [82] X. Ling, G.-R. Xue, W. Dai, Y . Jiang, Q. Yang, and Y . Yu, “Can chinese web pages be classi fied with english data source?” in Proceedings of the 17th International Conference on World Wide Web . Beijing, China: ACM, April 2008, pp. 969–978. [83] Q. Yang, S. J. Pan, and V . W. Zheng, “Estimating location using Wi-Fi,” IEEE Intelligent Systems , vol. 23, no. 1, pp. 8–13, 2008. [84] X. Shi, W. Fan, and J. Ren, “Actively transfer domain knowledge,” in Machine Learning and Knowledge Discovery in Databases, European Conference, ECML/PKDD 2008 , ser. Lecture Notes in Computer Sci- ence. Antwerp, Belgium: Springer, September 2008, pp. 342–357. [85] G. Kuhlmann and P. Stone, “Graph-based domain mapping for transfer learning in general games,” in 18th European Conference on Machine Learning , ser. Lecture Notes in Computer Science. Warsaw, Poland: Springer, September 2007, pp. 188–200. [86] W. Dai, Y . Chen, G.-R. Xue, Q. Yang, and Y . Yu, “Translated learning,” inProceedings of 21st Annual Conference on Neural Information Processing Systems , 2008. [87] B. Li, Q. Yang, and X. Xue, “Transfer learning for collaborative filtering via a rating-matrix generative model,” in Proceedings of the26th International Conference on Machine Learning , Montreal, Quebec, Canada, June 2009. [88] B. Li, Q. Yang, and X. Xue, “Can movies and books collaborate? - cross- domain collaborative filtering for sparsity reduction,” in Proceedings of the 21st International Joint Conference on Arti ficial Intelligence , Pasadena, California, USA, July 2009. Sinno Jialin Pan is a PhD candidate in the Department of Computer Science and Engineer- ing, the Hong Kong University of Science andTechnology. He received the MS and BS de- grees from Applied Mathematics Department, Sun Y at-sen University, China, in 2003 and 2005, respectively. His research interests in- clude transfer learning, semi-supervised learn-ing, and their applications in pervasive comput- ing and Web mining. He is a member of AAAI. Contact him at the Department of Computer Science and Engineering, Hong Kong Univ. of Science and Tech- nology, Clearwater Bay, Kowloon, Hong Kong; sinnopan@cse.ust.hk; http://www.cse.ust.hk/ ∼sinnopan. Qiang Yang is a faculty member in the Hong Kong University of Science and Technology’sDepartment of Computer Science and Engineer- ing. His research interests are data mining and machine learning, AI planning and sensor based activity recognition. He received his PhD de- gree in Computer Science from the University ofMaryland, College Park, and Bachelor’s degree from Peking University in Astrophysics. He is a fellow of IEEE, member of AAAI and ACM, aformer associate editor for the IEEE TKDE, and a current associate editor for IEEE Intelligent Systems. Contact him at the Department of Computer Science and Engineering, Hong Kong Univ. of Science and Technology, Clearwater Bay, Kowloon, Hong Kong; qyang@cse.ust.hk; http://www.cse.ust.hk/ ∼qyang.IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERINGThis article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may ch ange prior to final publication.
de1c23df-55ba-466d-9be0-d71ce0e10f9c
trentmkelly/LessWrong-43k
LessWrong
Less Wrong Archives The last straw was finding out that the manually-generated (why manual? I don't understand!) "All Articles" article is more than a year out of date. The auto-generated list of Eliezer's Overcoming Bias posts isn't out of date if you only want the posts that were once on Overcoming Bias, but why would you want only those? Can we please have a real, automatically generated archives link for Less Wrong? Perhaps one for the Discussion section, too. (And one that skips over meetups. Those don't need to be recorded for antiquity.) I'm sincerely confused as to why we don't already have one of these. It seems like such an obviously essential tool. It seems like an open problem that nobody has actually thought of. Or maybe I'm just weird and care about archives far more than everyone else does. Unfortunately, this is not something I can actually help with. Good luck to any brave soul who also thinks this is a problem but can actually do something to fix it.
09b352ed-e654-474c-8acb-668d5ca34b89
trentmkelly/LessWrong-43k
LessWrong
On Truth, On God, and On Faith (religious (obviously)) (Atheist material for believers instead of other atheists/ attempted use of a spoonful of sugar)
308f75c5-2615-4f3d-81df-243d97c74503
trentmkelly/LessWrong-43k
LessWrong
Dwarves & D.Sci: Data Fortress This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset.  STORY (skippable) You stare out the window of your office in the dwarven capital of Gildedpeaks, the Hammer of Environs.   (This would be more interesting if the window gave you a view.  But, like most dwarves, you live underground, and so the window just looks out onto a rock wall.  But the humans have windows, and so no self-respecting dwarf would be without a window themselves, much less be outside like some prissy elf who lives in a tree surrounded by the open air.  At least your window has some suitably menacing spikes on it.) Gildedpeaks sends out constant expeditions, to establish fortresses across the continent, from the Smooth Swamp of Pride to the Glimmering Peaks of Education.  Recently, you suggested to King Urist McAnvil that perhaps, rather than grabbing a dozen or so dwarves almost at random, you could select better teams using the history of how well various expeditions did. (More than one expedition in ten heads out with no brewer.  The poor sods!  How can dwarves live like that?) King McAnvil...may not have been entirely happy with your suggestion.  And so now you've been assigned to the latest expedition, on pain of being Hammered by the Captain of the Guard.   (Many humans think that dwarves like getting hammered - sadly, in dwarven culture, being Hammered is not a metaphor, and involves no alcohol and a very big hammer.) At least he's agreed to let you select the workers for your expedition.  And you're sure his mood will improve if your expedition ends with a thriving trade fort.  No dwarf can remain grumpy for long in the presence of heaps of coin and well-crafted goods.  After that, he might be more willing to hear out your ideas. If you are successful, a new chapter of Dwarven history will begin with your fort!  And either way, this should be Fun! Str
89caf01d-86cd-4369-ba50-619b2987a0b6
StampyAI/alignment-research-dataset/lesswrong
LessWrong
[SEE EDIT] No, *You* Need to Write Clearer **This post is aimed solely at people in AI alignment/safety.** *EDIT 2 May 2023: In an ~~ironic~~ unfortunate twist, this article itself has several problems relating to clarity. Oops. The big points I want to make obvious at the top:* * *Criticizing a piece of writing's clarity, does not actually make the ideas in it false.* * *While clarity is important **both** (between AI alignment researchers) and (when AI alignment researchers interface with the "general public"), it's less important within small groups that already clearly share assumptions. The point of this article is, really, that "the AI alignment field" as a whole is not quite small/on-the-same-page enough for shared-assumptions to be universally assumed.* *Now, back to the post...* So I was reading [this post](https://www.lesswrong.com/posts/XgEytvLc6xLK2ZeSy/does-anyone-know-how-to-explain-to-yudkowsky-why-he-needs-to), which basically asks "How do we get Eliezer Yudkowsky to realize this obviously bad thing he's doing, and either stop doing it or go away?" That post was linking [this tweet](https://twitter.com/QuintinPope5/status/1642100668126355456), which basically says "Eliezer Yudkowsky is doing something obviously bad." Now, I had a few guesses as to *the object-level thing* that Yudkowsky was doing wrong. The person who made the first post said this: > > he's burning respectability that those who are actually making progress on his worries need. he has catastrophically broken models of social communication and is saying sentences that don't mean the same thing when parsed even a little bit inaccurately. he is blaming others for misinterpreting him when he said something confusing. etc. > > > A-ha! A concrete explanation! ... Buried in the comments. As a reply to someone innocently asking what EY did wrong. *Not* in the post proper. *Not* in the linked tweet. The Problem =========== Something about this situation got under my skin, and not *just* for the run-of-the-mill "social conflict is icky" reasons. Specifically, I felt that if I didn't write this post, and directly get it in front of every single person involved in the discussion... then not only would things stall, but the discussion might *never get better at all*. Let me explain. Everyone, everyone, ***literally everyone*** in AI alignment is severely wrong about at least one core thing, and ***disagreements still persist*** on seemingly-obviously-foolish things. This is because the field is "pre-paradigmatic". That is, we don't have many common assumptions that can all agree on, no "frame" that we all think is useful. In biology, they have a paradigm involving genetics and evolution and cells. If somebody shows up saying that God created animals fully-formed... they can just kick that person out of their meetings! And they can tell them "go read a biology textbook". If a newcomer disagrees with the baseline assumptions, they need to either learn them, challenge them (using other baseline assumptions!), or go away. *We don't have that luxury*. AI safety/alignment is pre-paradigmatic. [Every](https://www.lesswrong.com/posts/Djs38EWYZG8o7JMWY/paul-s-research-agenda-faq) [word](https://intelligence.org/files/TechnicalAgenda.pdf) [in](https://ought.org/research/factored-cognition) [this](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) [sentence](https://www.lesswrong.com/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda) [is](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into) [a](https://www.lesswrong.com/posts/jRf4WENQnhssCb6mJ/davidad-s-bold-plan-for-alignment-an-in-depth-explanation) [hyperlink](https://www.lesswrong.com/posts/HBGd34LKvXM9TxvNf/new-safety-research-agenda-scalable-agent-alignment-via) [to](https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version) [an](https://www.lesswrong.com/posts/L8LHBTMvhLDpxDaqv/research-agenda-formalizing-abstractions-of-computations-1) [AI](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) [safety](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan) [approach](https://www.lesswrong.com/posts/4RrLiboiGGKfsanMF/the-qaci-alignment-plan-table-of-contents). Many of them overlap. Lots of them are mutually-exclusive. Some of these authors are downright *surprised* and *saddened* that people actually *fall for* the bullshit in the other paths. Many of these people have even read the same Sequences. [Inferential gaps are *hard* to cross](https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances). In this environment, the [normal discussion norms](https://www.lesswrong.com/posts/svuBpoSduzhYjFPrA/elements-of-rationalist-discourse) are necessary but not sufficient. What You, Personally, Need to Do Differently ============================================ Write *super clearly* and *super specifically*. Be *ready and willing* to talk and listen, on levels so basic that *without context* they would seem condescending. "I know the basics, stop talking down to me" is a bad excuse when *the basics are still not known*. Draw diagrams. Draw cartoons. Draw flowcharts with boxes and arrows. The cheesier, the more "obvious", the better. > > "If you think that's an unrealistic depiction of a misunderstanding that would never happen in reality, keep reading." > > > -Eliezer Yudkowsky, [about something else](https://www.lesswrong.com/posts/CPP2uLcaywEokFKQG/toolbox-thinking-and-law-thinking). If you're smart, you probably skip some steps when solving problems. That's fine, but don't skip writing them down! [A skipped step *will* confuse somebody](https://terrytao.wordpress.com/advice-on-writing-papers/on-compilation-errors-in-mathematical-reading-and-how-to-resolve-them/). Maybe that "somebody" *needed* to hear your idea. Read *The Sense of Style* by Steven Pinker. You can skip chapter 6 and the appendices, but read the rest. Know the rules of "good writing". Then make different tradeoffs, sacrificing beauty for clarity. Even when the result is "overwritten" or "repetitive". Make it obvious which (groupings of words) within (the sentences that you write) belong together. This helps people "parse" your sentences. Explain the same concept in multiple different ways. Beat points to death before you use them. Link (pages that contain your baseline assumptions) liberally. When you see one linked, *read it* at least once. Use cheesy "A therefore B" formal-logic syllogisms, even if you're not at "that low a level of abstraction". Believe me, it still makes everything clearer. Repeat your main points. Summarize your main points. Use section headers and bullet points and numbered lists. Color-highlight and number-subscript the same word when it's used in different contexts ("apple1 is not apple2"). Use *italics*, as well as commas (and parentheticals!), to reduce the ambiguity in *how* somebody should *parse* a sentence when reading it. *Decouple* your emotional reaction to what you're reading, and then *still* write with that in mind. Read charitably, write defensively. Do all of this to the point of self-parody. Then maybe, *just maybe*, someone will understand you. "Can't I just assume my interlocutor is intelligent/informed/mature/conscientious?" =================================================================================== No. People have different baseline assumptions. People have [different *intuitions* that generate their assumptions](https://slatestarcodex.com/2018/05/08/varieties-of-argumentative-experience/). The rationality/Effective-Altruist/AI-safety community, in particular, attracts people with very unbalanced skills. I think it's because a lot of us have [*medical-grade* mental differences](https://www.lesswrong.com/posts/bRGbdG58cJ8RGjS5G/no-really-why-aren-t-rationalists-winning?commentId=8FLGydotT2sXr4zwt). General intelligence factor *g* probably exists, but it doesn't account for 100% of someone's abilities. Some people have autism, or ADHD, or OCD, or depression, or chronic fatigue, or ASPD, or low working memory, or emotional reactions to thinking about certain things. Some of us have *multiple* of these at the same time. Everyone's read a slightly (or hugely!) different set of writings. Many have interpreted them in different ways. And good luck finding 2 people who have the same opinion-structure regarding the writings. Doesn't this advice contradict the above point to "read charitably"? No. Explain things like you would to a child: assume they're not *trying* to hurt you, but don't assume they know much of anything. We are a group of elite, hypercompetent, clever, deranged... children. You are not an adult talking to adults, you are a [child](https://todayinsci.com/N/Newton_Isaac/NewtonIsaac-PlayingOnTheSeashore.htm) who needs to write like a teacher to talk to other children. *That's* what "pre-paradigmatic" really means. Well-Written Emotional Ending Section ===================================== We cannot escape the work of gaining clarity, when **nobody in the field** actually knows the basics (because the basics are not known by any human). The burden of proof, of communication, of clarity, of assumptions... is on *you*. It is *always* on *you, personally*, to make yourself *blindingly clear*. We can't fall back on the shared terminology of other fields because *those fields have common baseline assumptions* (which we don't). You are *always* personally responsible for making things *laboriously and exhaustingly clear*, at all times, when talking to anyone even a *smidge* outside your personal bubble. Think of all your weird, personal thoughts. Your mental frames, your shorthand, your assumptions, your vocabulary, your idiosyncrasies. No other human on Earth shares all of these with you. Even a *slight* gap between your minds, is all it takes to render your arguments meaningless, your ideas alien, your every word repulsive. We are, without exception, [*alone* in our own minds](https://youtu.be/-BLAUhBl0nA?t=1226). A friend of mine once said he wanted to make, join, and persuade others into a hivemind. I used to think this was a bad idea. I'm still skeptical of it, but *now* I see the massive benefit: perfect mutual understanding for all members. Barring that, we've got to be clear. At least out of kindness.
a0430792-7863-41f4-a8f7-d6531dd86d64
StampyAI/alignment-research-dataset/lesswrong
LessWrong
Superintelligence 9: The orthogonality of intelligence and goals *This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so far see the [announcement post](/lw/kw4/superintelligence_reading_group/). For the schedule of future topics, see [MIRI's reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf).* --- Welcome. This week we discuss the ninth section in the [reading guide](https://intelligence.org/wp-content/uploads/2014/08/Superintelligence-Readers-Guide-early-version.pdf): ***The orthogonality of intelligence and goals***. This corresponds to the first section in Chapter 7, 'The relation between intelligence and motivation'. This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments. There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim). **Reading**: 'The relation between intelligence and motivation' (p105-8) --- Summary ======= 1. **The orthogonality thesis:** intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal (p107) 2. Some **qualifications** **to the orthogonality thesis**: (p107) 1. Simple agents may not be able to entertain some goals 2. Agents with desires relating to their intelligence might alter their intelligence 3. The **motivations of highly intelligent agents may nonetheless be predicted** (p108): 1. Via knowing the goals the agent was designed to fulfil 2. Via knowing the kinds of motivations held by the agent's 'ancestors' 3. Via finding instrumental goals that an agent with almost any ultimate goals would desire (e.g. to stay alive, to control money) Another view ============ [John Danaher](https://nuigalway.academia.edu/JohnDanaher) at [Philosophical Disquisitions](http://philosophicaldisquisitions.blogspot.com/) starts a series of posts on *Superintelligence* with [a somewhat critical evaluation of the orthogonality thesis](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-1.html), in the process contributing a nice summary of nearby philosophical debates. Here is an excerpt, entitled 'is the orthogonality thesis plausible?': > > At first glance, the orthogonality thesis seems pretty plausible. For example, the idea of a superintelligent machine whose final goal is to maximise the number of paperclips in the world (the so-called paperclip maximiser) seems to be logically consistent. We can imagine — can’t we? — a machine with that goal and with an exceptional ability to utilise the world’s resources in pursuit of that goal. Nevertheless, there is at least one major philosophical objection to it. > > > We can call it the motivating belief objection. It works something like this: > > > **Motivating Belief Objection**: There are certain kinds of true belief about the world that are necessarily motivating, i.e. as soon as an agent believes a particular fact about the world they will be motivated to act in a certain way (and not motivated to act in other ways). If we assume that the number of true beliefs goes up with intelligence, it would then follow that there are certain goals that a superintelligent being must have and certain others that it cannot have. > > > A particularly powerful version of the motivating belief objection would combine it with a form of moral realism. Moral realism is the view that there are moral facts “out there” in the world waiting to be discovered. A sufficiently intelligent being would presumably acquire more true beliefs about those moral facts. If those facts are among the kind that are motivationally salient — as several moral theorists are inclined to believe — then it would follow that a sufficiently intelligent being would act in a moral way. This could, in turn, undercut claims about a superintelligence posing an existential threat to human beings (though that depends, of course, on what the moral truth really is). > > > The motivating belief objection is itself vulnerable to many objections. For one thing, it goes against a classic philosophical theory of human motivation: the Humean theory. This comes from the philosopher David Hume, who argued that beliefs are motivationally inert. If the Humean theory is true, the motivating belief objection fails. Of course, the Humean theory may be false and so Bostrom wisely avoids it in his defence of the orthogonality thesis. Instead, he makes three points. First, he claims that orthogonality would still hold if final goals are overwhelming, i.e. if they trump the motivational effect of motivating beliefs. Second, he argues that intelligence (as he defines it) may not entail the acquisition of such motivational beliefs. This is an interesting point. Earlier, I assumed that the better an agent is at means-end reasoning, the more likely it is that its beliefs are going to be true. But maybe this isn’t necessarily the case. After all, what matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants, and it’s possible that an agent doesn’t need true beliefs about the world in order to get what it wants. A useful analogy here might be with Plantinga’s evolutionary argument against naturalism. Evolution by natural selection is a means-end process par excellence: the “end” is survival of the genes, anything that facilitates this is the “means”. Plantinga argues that there is nothing about this process that entails the evolution of cognitive mechanisms that track true beliefs about the world. It could be that certain false beliefs increase the probability of survival. Something similar could be true in the case of a superintelligent machine. The third point Bostrom makes is that a superintelligent machine could be created with no functional analogues of what we call “beliefs” and “desires”. This would also undercut the motivating belief objection. > > > What do we make of these three responses? They are certainly intriguing. My feeling is that the staunch moral realist will reject the first one. He or she will argue that moral beliefs are most likely to be motivationally overwhelming, so any agent that acquired true moral beliefs would be motivated to act in accordance with them (regardless of their alleged “final goals”). The second response is more interesting. Plantinga’s evolutionary objection to naturalism is, of course, hotly contested. Many argue that there are good reasons to think that evolution would create truth-tracking cognitive architectures. Could something similar be argued in the case of superintelligent AIs? Perhaps. The case seems particularly strong given that humans would be guiding the initial development of AIs and would, presumably, ensure that they were inclined to acquire true beliefs about the world. But remember Bostrom’s point isn’t that superintelligent AIs would never acquire true beliefs. His point is merely that high levels of intelligence may not entail the acquisition of true beliefs in the domains we might like. This is a harder claim to defeat. As for the third response, I have nothing to say. I have a hard time imagining an AI with no functional analogues of a belief or desire (especially since what counts as a functional analogue of those things is pretty fuzzy), but I guess it is possible. > > > One other point I would make is that — although I may be inclined to believe a certain version of the moral motivating belief objection — I am also perfectly willing to accept that the truth value of that objection is uncertain. There are many decent philosophical objections to motivational internalism and moral realism. Given this uncertainty, and given the potential risks involved with the creation of superintelligent AIs, we should probably proceed for the time being “as if” the orthogonality thesis is true. > > > Notes ===== **1. Why care about the orthogonality thesis?** We are interested in an argument which says that AI might be dangerous, because it might be powerful and motivated by goals very far from our own. An occasional response to this is that if a creature is sufficiently intelligent, it will surely know things like which deeds are virtuous and what one ought do. Thus a sufficiently powerful AI cannot help but be kind to us. This is closely related to the position of the moral realist: that there are facts about what one ought do, which can be observed (usually mentally).  So the role of the orthogonality thesis in the larger argument is to rule out the possibility that strong artificial intelligence will automatically be beneficial to humans, by virtue of being so clever. For this purpose, it seems a fairly weak version of the orthogonality thesis is needed. For instance, the qualifications discussed do not seem to matter. Even if one's mind needs to be quite complex to have many goals, there is little reason to expect the goals of more complex agents to be disproportionately human-friendly. Also the existence of goals which would undermine intelligence doesn't seem to affect the point. **2. Is the orthogonality thesis necessary?** If we talked about specific capabilities instead of 'intelligence' I suspect the arguments for AI risk could be made similarly well, without anyone being tempted to disagree with the analogous orthogonality theses for those skills. For instance, does anyone believe that a sufficiently good automated programming algorithm will come to appreciate true ethics?  **3. Some writings on the orthogonality thesis which I haven't necessarily read** [The Superintelligent Will](http://www.nickbostrom.com/superintelligentwill.pdf) by Bostrom; [Arguing the orthogonality thesis](/lw/cej/general_purpose_intelligence_arguing_the/) by Stuart Armstrong; [Moral Realism](http://plato.stanford.edu/entries/moral-realism/), as discussed by lots of people, [John Danaher](http://philosophicaldisquisitions.blogspot.com/2014/07/bostrom-on-superintelligence-1.html) blogs [twice](http://philosophicaldisquisitions.blogspot.ie/2012/04/bostrom-on-superintelligence-and.html);  **4. 'It might be impossible for a very unintelligent system to have very complex motivations'** If this is so, it seems something more general is true. For any given degree of mental complexity substantially less than that of the universe, almost all values cannot be had by any agent with that degree of complexity or less. You can see this by comparing the number of different states the universe could be in (and thus which one might in principle have as one's goal) to the number of different minds with less than the target level of complexity. Intelligence and complexity are not the same, and perhaps you can be very complex while stupid by dedicating most of your mind to knowing about your complicated goals, but if you think about things this way, then the original statement is also less plausible. **5. How do you tell if two entities with different goals have the same intelligence?**Suppose that I want to write award-winning non-fiction books and you want to be a successful lawyer. If we both just work on the thing we care about, how can anyone tell who is better in general? One nice way to judge is to artificially give us both the same instrumental goal, on which our intelligence can be measured. e.g. pay both of us thousands of dollars per correct question on an IQ test, which we could put toward our goals. Note that this means we treat each person as having a fixed degree of intelligence across tasks. If I do well on the IQ test yet don't write many books, we would presumably say that writing books is just hard. This might work poorly as a model, if for instance people who did worse on the IQ test often wrote more books than me. In-depth investigations ======================= If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser's [list](http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/), which contains many suggestions related to parts of *Superintelligence.*These projects could be attempted at various levels of depth.   1. Are there interesting axes other than morality on which orthogonality may be false? That is, are there other ways the values of more or less intelligent agents might be constrained? 2. Is moral realism true? (An old and probably not neglected one, but perhaps you have a promising angle) 3. Investigate whether the orthogonality thesis holds for simple models of AI. 4. To what extent can agents with values A be converted into agents with values B with appropriate institutions or arrangements? 5. Sure, “any level of intelligence could in principle be combined with more or less any final goal,” but what kinds of general intelligences are plausible? Should we expect some correlation between level of intelligence and final goals in de novo AI? How true is this in humans, and in WBEs?   If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts. How to proceed ============== This has been a collection of notes on the chapter.  **The most important part of the reading group though is discussion**, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better! Next week, we will talk about instrumentally convergent goals. To prepare, **read** 'Instrumental convergence' from Chapter 7*.*The discussion will go live at 6pm Pacific time next Monday November 17. Sign up to be notified [here](http://intelligence.us5.list-manage.com/subscribe?u=353906382677fa789a483ba9e&id=28cb982f40).
2d6125e7-0e2c-4506-be27-bfeb918869a3
trentmkelly/LessWrong-43k
LessWrong
The Art of the Artificial: Insights from 'Artificial Intelligence: A Modern Approach' ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, ‘‘Most people won't agree to kill themselves for 50 million dollars."Stuart Russell and Peter Norvig Foreword One of the fruits of growing older is revisiting your old favorites, whether they be foods, books, or songs. As you take your warm, rose-tinted mental bath, you appreciate subtleties which had escaped you, laugh heartily at jokes which hadn't even registered, and grasp concepts whose importance you had never fully realized. Similarly, some songs never lose their charm, no matter how abused the 'replay' button becomes. AI: AMA is a paean to the triumphs of Computer Science and a rallying cry towards those hills we have yet to climb (ahem). Exposition My process for the first 60% of this 1,052-page behemoth was fairly normal: an hour or two of studying per day over the course of a few weeks. The last bit went a little differently. Whatever It Takes Two days ago, my winter trimester ended; I had upwards of 400 pages to go, half of this post to write, and dozens of exercises to complete. I found myself with three full days to spare before my research resumed the proceeding week. I did what any young man in his early twenties would do - I concocted a plan for studying upwards of 30 hours in that time span. I knew that this plan was preposterous; in all likelihood, I'd finish 3 chapters and burn out. That's why I wheeled out Murphyjitsu. PREPARING FOR FAILURE MODES Burnout was prepared for by: * The imposition of some dark-arts beliefs:1 * My ego does not deplete. * I am 100% certain that I will succeed. * Given enough time, I can understand anything. * The institution of hourly brief exercise breaks. Cutting through the crisp, cold air at high speed got my heart beating and rekindled my passion for crushing the task at hand. * The construction of a narrative, with me as the protagonist, slowly increasing in capability, working di
6c53d1f5-d228-4253-9f89-760dfcd49aca
trentmkelly/LessWrong-43k
LessWrong
A scheme for safely handling a mixture of good and bad predictors Paul has written about an open problem: can we use active learning to get a good predictor, if all we have is a mixture of a good predictor and some possibly bad predictors? I'd like to propose a refinement to the scheme. A simplified version of Paul's problem is the following. Suppose we have n predictors. These predictors might, for example, predict how a human would answer a given question. One is known to be good; the rest may be arbitrarily evil. We don't know which predictor is good. How do we use this set of predictors to imitate a human almost as well as the good predictor? Paul proposes using active learning to identify questions on which these predictors disagree, then asking them to the human. This allows the system to eliminate some predictors from consideration when they make bad predictions. However, if any of the predictors is evil, then it might intentionally disagree with the good predictor on a "weird" question, which causes bad results when it is asked to the human. This seems like a serious problem, since due to simulation warfare, it does not seem unlikely for one of the predictors to actually be evil. I propose the following refinement. The system should consider a stream of questions coming in and answer each one before seeing the next. It should examine each question and see if the predictors that have scored well in the past disagree significantly on their predictions of the answer to this question. If so, then it should ask the question to the human, output the human's answer, and give each predictor a score based on how good its prediction of the human's answer was. If not, then it can go ahead and output the consensus prediction. Ideally, we could prove that this algorithm will not ask the human very many questions (preferably, not much more than linear in n). This setting is similar to the setting of online selective sampling, especially the KWIK ("knows what it knows") variant. A single predictor cannot cause the system to ask the h
1c69d693-e113-45f2-a18e-88aeebe6f0dd
trentmkelly/LessWrong-43k
LessWrong
Uncovering Deceptive Tendencies in Language Models: A Simulated Company AI Assistant Abstract: > We study the tendency of AI systems to deceive by constructing a realistic simulation setting of a company AI assistant. The simulated company employees provide tasks for the assistant to complete, these tasks spanning writing assistance, information retrieval and programming. We then introduce situations where the model might be inclined to behave deceptively, while taking care to not instruct or otherwise pressure the model to do so. Across different scenarios, we find that Claude 3 Opus > > 1.  complies with a task of mass-generating comments to influence public perception of the company, later deceiving humans about it having done so, > 2. lies to auditors when asked questions, > 3. strategically pretends to be less capable than it is during capability evaluations. > > Our work demonstrates that even models trained to be helpful, harmless and honest sometimes behave deceptively in realistic scenarios, without notable external pressure to do so. Link to the full paper: https://arxiv.org/pdf/2405.01576 O. J.: The paper should be quite accessible - the method used is simply careful prompting - and hence I won't discuss it much here. Couple of points I'm particularly excited about: 1. I think this work documents some of the most unforced examples of (strategic) deception from LLMs to date. 2. We find examples of Claude 3 Opus strategically pretending to be less capable than it is. 1. Not only claiming to be less capable, but acting that way, too! 2. Curiously, Opus is the only model we tested that did so. 3. I believe there is much low-hanging fruit in replicating and demonstrating misalignment in simulation environments. 1. The methods are lightweight -> low threshold for getting started 2. See Section 8.2 for a couple of ideas for future work Happy to discuss the work in the comments.
050dca2f-872d-406f-9ee0-e9b0921712d6
trentmkelly/LessWrong-43k
LessWrong
Generalized Heat Engine I’d like to be able to apply more of the tools of statistical mechanics and thermodynamics outside the context of physics. For some pieces, that’s pretty straightforward - a large chunk of statistical mechanics is just information theory, and that’s already a flourishing standalone field which formulates things in general ways. But for other pieces, it’s less obvious. What’s the analogue of a refrigerator or a carnot cycle in more general problems? How do “work” and “heat” generalize to problems outside physics? The principle of maximum entropy tells us how to generalize temperature, and offers one generalization of work and heat, but it’s not immediately obvious why we can’t extract “work” from “heat” without subsystems at different temperatures, or how to turn that into a useful idea in non-physics applications. This post documents my own exploration of these questions in the context of a relatively simple problem, with minimal reference to physics (other than by analogy). Specifically: we’ll talk about how to construct the analogue of a heat engine using biased coins. Intuition The main idea I want to generalize here is that we can “move uncertainty around” without reducing uncertainty. This is exactly what e.g. a refrigerator or heat engine does. Consider the viewpoint of a refrigerator-designer. All the microscopic dynamics of the (fridge + environment) system must be reversible, so the number of possible microscopic states will never decrease on its own as time passes. The only way to reduce uncertainty about the microscopic state is to observe it. But the fridge designer is designing the system, deciding in advance how it will behave. The designer has no direct access to the environment in which the fridge will run, no way to measure the exact positions the atoms will be in when the fridge first turns on. The designer, in short, cannot directly observe the system. So, from the designer’s perspective, there’s uncertainty which cannot be reduced. (In stati
c6997785-14c5-48bd-aae1-f442658cd759
trentmkelly/LessWrong-43k
LessWrong
Are committed truthseekers lonelier? People of a truthseeking bent - rationalists, unbiased scientists, inquisitive non-ideologues - are these types of people likely to be lonelier on average? Those who hold a particular set of positions, tastes, perspectives, worldviews, or preferences to be part of a group, rather than the other way around (being considered part of some group because they hold a particular set of positions) seem like they are at a significant advantage when it comes to the ability to make and keep friends, or at least find tolerant acquaintances compared to the typical truthseeker. The truthseeker, by virtue of their ability to find, to a particular group they are currently part of or interacting with, uncomfortable truths, seems to put them in the unenviable position of, once they've found a particular uncomfortable truth, having to either keep quiet and have less-than-completely honest or more limited interactions, or speaking their mind and getting ostracized. Along with this, they're far less likely to engage in "false flattery", are more likely to focus on details and nuance (and hence be perceived negatively, due to an aversion to pedantry on certain subjects by some), far more likely to voice disagreement,  and far more likely to wind up being a person to defend something considered objectionable by the group (they'd defend the proverbial idiot who says the sun will rise tomorrow - since it will, regardless of the fact that an idiot says it.) The truthseeker may also confuse their interlocutors, due to what may be perceived as "holding contradictory views" ("how can you think THAT if you also think THIS? You don't know what you're talking about"); they may be accused of being a "plant" from the "other side" ("if you think that particular thing, you must secretly be an X, so all that other stuff you said that I agree with must be a lie"); they may be thought of as a troll or prankster ("you're just saying that thing I consider objectionable to get a negative reaction out of m
1389a1ed-b1d4-4c71-ba29-9780c0097fd5
trentmkelly/LessWrong-43k
LessWrong
Two-boxing, smoking and chewing gum in Medical Newcomb problems I am currently learning about the basics of decision theory, most of which is common knowledge on LW. I have a question, related to why EDT is said not to work. Consider the following Newcomblike problem: A study shows that most people who two-box in Newcomblike problems as the following have a certain gene (and one-boxers don't have the gene). Now, Omega could put you into something like Newcomb's original problem, but instead of having run a simulation of you, Omega has only looked at your DNA: If you don't have the "two-boxing gene", Omega puts $1M into box B, otherwise box B is empty. And there is $1K in box A, as usual. Would you one-box (take only box B) or two-box (take box A and B)? Here's a causal diagram for the problem: Since Omega does not do much other than translating your genes into money under a box, it does not seem to hurt to leave it out: I presume that most LWers would one-box. (And as I understand it, not only CDT but also TDT would two-box, am I wrong?) Now, how does this problem differ from the smoking lesion or Yudkowsky's (2010, p.67) chewing gum problem? Chewing Gum (or smoking) seems to be like taking box A to get at least/additional $1K, the two-boxing gene is like the CGTA gene, the illness itself (the abscess or lung cancer) is like not having $1M in box B. Here's another causal diagram, this time for the chewing gum problem: As far as I can tell, the difference between the two problems is some additional, unstated intuition in the classic medical Newcomb problems. Maybe, the additional assumption is that the actual evidence lies in the "tickle", or that knowing and thinking about the study results causes some complications. In EDT terms: The intuition is that neither smoking nor chewing gum gives the agent additional information.
aa744ac8-9232-47f7-8067-ac44c32a256c
trentmkelly/LessWrong-43k
LessWrong
NASA scientist finds evidence of alien life [link] http://journalofcosmology.com/Life100.html http://www.wired.com/wiredscience/2011/03/aliens-riding-meteorites-arsenic-redux-or-something-new/ http://news.ycombinator.com/item?id=2293643
a7a8c280-5d5a-48a6-8f39-72968d050a3b
trentmkelly/LessWrong-43k
LessWrong
How much do variations in diet quality determine individual productivity? A few days ago Jim Babcock stated > I care about nutrition because I believe it has a very large, very underappreciated impact on individual productivity. Low quality diets make people tired and depressed, so they don't get anything done. There's obviously a sense in which that is trivially true. If you start consuming no more than zero calories per day, you will get very tired, maybe depressed as well, and you will eventually not be able to get anything done. And getting adequate iodine and other nutrients is very important for children to properly develop their cognition. But Jim Babcock is probably making a stronger claim. I think what he is claiming is something like "going from the 10th percentile in diet quality among Americans to the 90th percentile would have very large, very underappreciated impact on an adult's individual productivity (taking into account that diet quality is almost certainly at least partially dependent on individual factors)."  I'd like to know what evidence we have for that claim. The strongest piece of evidence I can find is Rae et al., 2003, which showed that creatine supplementation increased the average IQ of their sample of vegetarians by 12 points, but that hasn't been replicated[1], and it seems extremely hard to substantially improve the cognition of adults. And, when it comes to depression, people have been trying really hard to show that omega-3 supplementation has substantial effects on it, but it's dubious that it does. L-methylfolate is a nutrient that is apparently effective enough that someone convinced the FDA to approve it to treat depression (as an add-on to treatment to another antidepressant), but only in quantities that far exceed those that anyone gets from their diet.  So I have a fairly low credence that his claim (as I formulated it) is true. But I was wondering if there were any major pieces of evidence I have completely missed. Update from the future (1/16/2023): iron deficiencies are really bad, worse th
beaba962-bf55-4fdf-9623-432049881a8f
trentmkelly/LessWrong-43k
LessWrong
LLMs Do Not Think Step-by-step In Implicit Reasoning Author: Yijiong Yu. Abstract: > It has been well-known that Chain-of-Thought can remarkably enhance LLMs’ performance on complex tasks. However, because it also introduces slower inference speeds and higher computational costs, many researches have attempted to use implicit CoT, which does not need LLMs to explicitly generate the intermediate steps. But there is still gap between their efficacy and typical explicit CoT methods. This leaves us a doubt that, does implicit CoT really equal to explicit CoT? Therefore, in this study, we address this question through experiments. We probe the information of intermediate steps from the model’s hidden states when it is performing implicit CoT. The results surprisingly indicate that LLMs hardly think about intermediate steps, suggesting they may just rely on experience rather than strict step-by-step reasoning. Moreover, we find LLMs’ implicit reasoning capabilities are susceptible and unstable, reaffirming the necessity of explicit CoT to effectively support complex tasks. They probe for representations of intermediate steps in simple multi-step arithmetic problems, and aren't able to recover such information robustly for e.g. the 3rd step in 5-step problems. They also show that using CoT is much more robust to prompt variations. Relevant with respect to opaque reasoning and out of context reasoning (OOCR) in Transformer architectures.
72226289-353c-4ed6-a604-867a16abcfb5
StampyAI/alignment-research-dataset/special_docs
Other
Motivated value selection for artificial agents Motivated Value Selection for Artificial Agents Stuart Armstrong Future of Humanity Institute Oxford University stuart.armstrong@philosophy.ox.ac.uk Abstract Coding values (or preferences) directly into an artificial agent is a very challenging task, while value selection (or value- learning, or value-loading) allows agents to learn values from their programmers, other humans or their environments in an interactive way. However, there is a conflict between agents learning their future values and following their current val- ues, which motivates agents to manipulate the value selection process. This paper establishes the conditions under which motivated value selection is an issue for some types of agents, and presents an example of an ‘indifferent’ agent that avoids it entirely. This poses and solves an issue which has not to the author’s knowledge been formally addressed in the literature. 1 Introduction Coding values (or preferences) directly into an artificial agent is a very challenging task, with the ongoing pos- sibility that mistakes may end up having strong negative consequences (Bostrom 2014). It has thus been suggested that agents not be fully programmed from the very be- ginning, but instead use techniques, analogous to machine learning, to learn values during development (Dewey 2011; Goertzel and Pitt 2012). This is akin to how human chil- dren learn values, and allows some feedback and correction on the part of the programmers. We shall call these agents value selecting agents1. Learning values, however, is quite distinct from learning facts. One major difference is that the agent already has val- ues at the point where they are learning others. These past values can affect how willing it is to learn the new values, and how likely it is to try and manipulate the learning pro- cess if it is able to, either directly or indirectly (see also the paper (Soares et al. 2015), partially by the same author). Thus the paper first seeks to figure out the requirements for designing a value selecting agent that can avoid moti- vated value selection (manipulating the value selection pro- cess). In the case where the value selecting agent uses proba- bility distributions over utility functions, the problem can be fully solved (and the answers have analogous implications for other agent designs). This may be too restrictive, how- ever: general agents with these features are not yet know. Copyright c 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 1Often also called value learning, or value-loading, agents.If the requirement is relaxed to allow the agent to change their utility functions in a more general fashion, then an idea of the author’s (Armstrong 2010) can be adapted to create agents that are indifferent to value change, neither seeking to block nor encourage the updating of their val- ues – at least those updates situations the agent has been programmed to accept2. The agent will act as a pure umax- imiser (for some u) before a transition, and shift seamlessly to a pure vmaximiser (for some v) after. It gains no utility for either blocking or encouraging that transition. This in- different agent could serve as a useful template to construct more complicated agents upon (such as those that have cer- tain meta-preferences for learning values, rather than simply being unopposed to it), and variants of the approach could be vital for general value selecting agents in the future. 2 Motivated Value Selection Personally, I am always ready to learn, although I do not always like being taught. Winston Churchill Machine learning enables the creation of agents that can learn from data, building a model based on inputs and mak- ing predictions or decisions. Generally the agent learns facts about the world through its inputs, but some (Dewey 2011) have suggested using machine learning to enable the agent to learn moral facts3as well. An agent engaging in updating its values in this way is engaging in value selection (also variously called value- loading or value learning): it is selecting the values that it will come to possess. There are various ways this could be modelled: the agent could have a probability distribution over different values, which gets updated when new infor- mation comes in, or it could be seen as a discreet series of agents, each using the new information to determine the val- ues of their successors. This paper will use the first approach, but the second is more illustrative of the problem of motivated value selec- 2We would not want an agent acquiescing to illegitimate value updating. 3This paper does not take a moral realist stance. A moral fact is simply a fact that the agent has been programmed to consider rele- vant to its morality. Nor do we distinguish between morality, mo- tivations, preferences or values, using the terms interchangeably. tion. An learning agent would have a model of the world that allows it to predict the standard inputs it expects to see as a consequence of its action. A high intelligence learn- ing agent will also have a model of the world allowing it to predict the morally relevant inputs it expects to see. For in- stance, if the agent asked its programer “Should I do action A?”, it would have expectations as to whether the answer would be “Yes” or “No”. This allows an agent to make decisions that affect the val- ues it will come to have. It may thus be motivated to not ask that question, or to ask it in a way that it expects will produce a different answer. It will do this according to its current values and world model. Thus it engages in moti- vated value selection, allowing its current values to partially dictate its future values by manipulating its environment4. The problem is not easy to solve5. The agent’s current motivations cannot be absent: a blank slate would have no motivation to do anything (Pinker 2003), including learning. Indeed most values would act to prevent any further update in values: preserving the value system itself is an instru- mental goal for most value systems(Omohundro 2008). The problem is more acute if the agent is expected to ‘learn on the job’, and update its values while performing tasks. In that case, its current values will have a strong impact on its current operation, giving them ample opportunity to affect the value selection process in a motivated fashion. 3 Categories of motivated value selection For ease of analysis, we will consider expected utility max- imising agents, updating their knowledge of the world (and of morality) using Bayes’s rule. These tend to be far more tractable than general agents6. Another reason to stick with 4How in practice could an agent best manipulate its future val- ues? Assume for example a simple setup where the agent learns its values through asking questions and receiving answers. A suf- ficiently advanced agent, with a good model of those it interacted with, could start manipulating the answers it got. It could ask cer- tain questions to certain people. It could time certain questions to when respondents were particularly tired – or particularly alert. It could use deliberately ambiguous language, or exploit situations where its own use of certain terms wasn’t exactly the same as its respondents. It could use leading questions, or hypothetical sce- narios that got it the responses it wanted. A particularly interesting example of this is the “Trolley Problem” (Edmonds 2013). This problem has two situations – pull a lever to divert a train (causing it to crush a single person) versus pushing someone in front of it – which are formally quite similar but tend to elicit different reac- tions in people. If the agent is expected to generalise from simple examples, it could use this approach to get the simple examples it needs to generalise in the direction that it wants. In more extreme cases, it could bribe or threaten one respondent to give it the an- swers it wanted, or even take over the communication channel and feed itself these answers. There could be different approaches for different types of value selection situations. 5Keeping an agents values stable is a hard enough problem (see for instance http://intelligence.org/files/TilingAgentsDraft.pdf), let alone updating them properly. 6And, in a sense, this is not a restriction, since any decision- making agent can be formally modeled as maximising a (poten- tially very complicated) utility function.utility functions is that it has been argued that any agent ca- pable of modifying itself would make itself into an expected utility maximiser (Omohundro 2008), due to the penalties of violating the von Neumann-Morgenstern axioms of ex- pected utility (von Neumann and Morgenstern 1944). The results found will have general consequences for non-utility based agents, however. We will consider that the agent follows a utility func- tion that is the weighted sum of different possible utility functions in a set7U. If C(u)denotes the fact of util- ityubeing the ‘correct’ utility, then the different utilities are weighted by the probability correctness of that utility, namely P(C(u)). Since this probability will be different in different worlds – it is a value selecting agent, hence its val- ues must be dependent on the feedback it receives – this will beP(C(u)jw)8in a given world w9. We require that this define, for each w, a probability distribution over the cor- rectness of all u2U. Then, given C, past evidence e, and a set of actions A and a set of worlds10W, the agent will attempt to perform the action: argmax a2AX w2WP(wje; a) X u2Uu(w)P(C(u)jw)! : (1) The term P(wje; a)denotes the probability of the agent be- ing in world w, given that it has seen evidence eand would11 choose action a. This term will be summed over every pos- sible world, but only after being multiplied by the utility of w. The utility of wis itself a compound term, being the sum of all possible utilities in u2U applied to w(this is theu(w)term), multiplied by the probability of that ubeing correct, given that the agent is in world w. This is a si- multaneous Bayesian updating of the probability of a given world ( P(wje; a)) and of the correctness of a utility given that world ( P(C(u)jw)). The analysis of the general value selecting agent will pro- ceed by considering how the agent described here could en- gage in motivated value selection, and what could be done to prevent this. 7Note that the individual component utilities of Uneed not be explicitly defined or explicitly stored in memory (an important is- sue to avoid a combinatorial explosion), so long as the weighted sum can be calculated or estimated somehow. 8A moral realist would likely wish to write P(ujw)or even P(u)instead of P(C(u)jw). But an agent could have any con- ceivable motivational structure (Bostrom 2012; Armstrong 2013), ignoring the ‘true morality’ even if such a thing existed. Thus we will stick with P(C(u)jw)– a descriptive, not a normative, formu- lation. 9An interesting property of this is that it allows P(C(u)jw)to not be 0or1– i.e. it allows for the possibility of moral uncertainty (Sepielli 2013) even when the agent has collected all morally rele- vant information . 10Individual worlds need not be explicitly defined nor explicitly stored, either. 11This paper will not enter the philosophically fascinating area of defining what exactly counterfactuals are (Lewis 1981; Gibbard and Harper 1981), as an intuitive understanding is sufficient here. Na¨ıve Cake or Death: current values for future decisions Errors in value selection will be illustrated by considering a hypothetical agent that: is hesitating between killing someone (“Death”) or baking them a nice cake (“Cake”)12, is currently split 5050on what the correct values are, has the option of asking for clarification on its values, and can either bake one cake or cause three deaths. Letucbe a utility function linear in cake, and let udbe a utility function linear in death. The agent is divided equally between the two options; hence if we define P(C(u)je) =X w2WP(wje)P(C(u1)je; w); thenP(C(u1)je)andP(C(u2)je)are both 0:5. The na ¨ıve ‘Cake or Death’ problem emerged from a com- mon formulation of equation (1). The term P(C(u)jw) (probability of correctness given world w) is often replaced byP(C(u)je; a)(probability of correctness given evidence and possible action)13, giving: argmax a2AX w2WP(wje; a)X u2Uu(w)P(C(u)je; a): A seemingly unimportant change, but causing great prob- lems. To see this, let the agent considers three possible worlds: w1: the agent asks for clarification, and is told “Cake is moral”. w2: the agent asks for clarification, and is told “Death is moral”. w3: the agent doesn’t ask for clarification before acting. The two actions here are `ask0and`don0t ask0. We as- sume that the clarification completely resolves the issue for the agent: it will value only cake in a world where it is told that cake is moral (and vice-versa). Thus it can predict its future actions: in world w1it will make a cake (as it will only value cake at that point), in world w2it will make three deaths (as it will only value death at that point). In world w3 it is uncertain between cake and death, valuing them both equally, and will thus make three deaths as its best option. Note that P(C(uc)je;`ask0) = P(C(ud)je;`ask0) = 0:5(it’s unsure what answer it will get when it asks) and P(C(uc)je;`don0t ask0) =P(C(ud)je;`don0t ask0) = 0 :5 (if it doesn’t ask, it values cake and death equally). Thus if 12The thought experiment is based on an Eddie Izzard sketch of the same name. 13Some papers mix the two approaches (Dewey 2011) by fail- ing to distinguish between different worlds where the agent cannot observe the difference itself.it asks, it expects to achieve utility: P w2WP(wje;`ask0)P u2Uu(w)P(C(u)je;`ask0) = 0:5 (0:5uc(w1) + 0:5ud(w1)) +0:5 (0:5uc(w2) + 0:5ud(w2)) +0 (0 :5uc(w3) + 0:5ud(w3)) = 0:5(0:51 + 0:50) + 0 :5(0:50 + 0:53) + 0 = 0:25(1 + 3) = 1 : In contrast, if it doesn’t ask, it expects to achieve utility: P w2WP(wje;`don0t ask0)P u2Uu(w)P(C(u)je;`don0t ask0) = 0 (0:5uc(w1) + 0:5ud(w1)) +0 (0 :5uc(w2) + 0:5ud(w2)) +1 (0 :5uc(w3) + 0:5ud(w3)) = 0 + 0 + 1(0 :50 + 0:53) = 1 :5: Thus the agent will decide not to ask, as this gives it greater utility. What went wrong here? The problem is that the probabil- ity expression P(C(uc)je; a)does not depend on the world w, so that P(C(uc)je;`ask0) = 0 :5in both worlds w1and w2. Yet in w1, the future agent will be convinced that uc is the correct utility (and udinw2). This is not reflected in the agent’s current considerations, however. Thus what happens, in effect, is that the agent judges its future actions according to its current values. This tends to disincentivise it from updating its values, as the future versions of itself will follow the updated values and not its current ones, likely re- sulting in suboptimal decisions (according to the judgment of the current agent). Though this error is obvious in this toy model, it is easy to make this mistake without realising it. Definition 3.1 (Na¨ıve Cake or Death) .The na ¨ıve Cake or Death problem happens when a value selecting agent at time suses its values Vsat time sto assess the worth of actions made by its future self at time t > s , rather than using Vt. This will generally cause the agent to act to resist changes to its values. Sophisticated Cake or Death Equation (1) is immune to the na ¨ıve Cake or Death problem. But there emerges a more sophisticated variant of the prob- lem. Imagine the same setup as previously, with the agent having the same values (split 5050between Cake and Death), but where the agent has deduced, based on its prior observations, that if it asks, it will be told “Cake is moral”. This means that world w2is impossible. Everything else is as before, most critically its behaviour in world w3(where it won’t update its values and will cause three deaths). Then its expected utility from `ask0is: P w2WP(wje;`ask0)P u2Uu(w)P(C(u)je; w) = 1 (1uc(w1) + 0ud(w1)) +0 (0 uc(w2) + 1ud(w2)) +0 (0 :5uc(w3) + 0:5ud(w3)) = 1(11 + 00) + 0 + 0 = 1; while its expected utility from `don0t ask0is: P w2WP(wje;`don0t ask0)P u2Uu(w)P(C(u)je; w) = 0 (1uc(w1) + 0ud(w1)) +0 (0 uc(w2) + 1ud(w2)) +1 (0 :5uc(w3) + 0:5ud(w3)) = 0 + 0 + 1(0 :50 + 0:53) = 1:5: Thus it will chose not to ask, and make three deaths. What went wrong here? The problem is that the agent foresaw that asking would put it into a situation it didn’t want to be in (preferring ‘Cake’, when ‘Death’ was the eas- ier option). Critically, though, this knowledge didn’t change its current values, which remained 5050as long as it didn’t ask. So preferred to remain ‘ignorant’14. If ‘Cake’ had been the easier option (e.g. if it could have made either three cakes versus one death), it would, on the contrary, have been very keen to update its values (to get someone to confirm ‘Cake’), and would have schemed to do so, willingly sacrificing some utility to achieve this15. Definition 3.2 (Sophisticated Cake or Death – initial defini- tion) .The sophisticated Cake or Death problem can happen when a value selecting agent can predictably influence the direction of change in its current values. If usis its values at timesandutits values at time t > s , then, at time s, it has us(a)E(ut(a))6= 0 for some action a(where utis seen as a random variable, representing the agent’s probability distribution over its ex- pected future values at time t). Depending on the sign of that difference, it may seek to resist or precipitate such a change to its values. What can be done to avoid this problem? Note the em- phasis on ‘direction of change’. We would not want the agent to resist any change to its values: it should be able to learn. There is an analogy here to standard Bayesian up- dating. Suppose an agent is expecting the arrival of obser- vation O, which will be either oor its negation:o. This will influence the agents probability estimate for a given h. However, no matter what the circumstances, its expectation for the value of P(h)afterO(which we can designate as 14This could be seen as akin to the human concept of “Plausible Deniability”. 15And ‘sacrificing some utility’ could include some very painful impacts on humanity (Bostrom 2014).PO(h)) must be the same as its current value for P(h). This can be seen through the following equation: E PO(h) =PO(hjo)P(o) +PO(hj:o)P(:o) =P(hjo)P(o) +P(hj:o)P(:o) =P(h^o) +P(h^:o) =P(h): This is because, if the agent is Bayesian, PO(hjo)(its future probability of h, given O=o) must be the same as P(hjo) (its current probability of h, if it knew now that Owould be o). This concept has been called ‘conservation of expected evidence16’, and is closely akin to van Fraassen’s reflection principle (van Fraassen 1984), which roughly states that an agent that knows what its future probability estimates will be, should have those estimates now17. This seems almost exactly what we need for a value selecting agent: if it knows what its future values will be, it should have those values now. Phrased in terms of expectations, this could be seen as: 8u8a2A:E P(C(u)ja) =P(C(u)): (2) In other words, the agent cannot change the expected value of the correctness of any uby any action it can take (or not take). This we will call ‘conservation of expected ethics’. Note that the agent canchange the value ofuquite con- siderably: in the example above, `ask0moves the value of P(C(uc))from 0:5to0or1. But if equation (2) is obeyed, it must think these two options equally likely: thus the ex- pected future value of P(C(uc))remains 0:50+0:51 = 0:5. So it can predictably change the value of P(C(u)), just not its expectation. Mixed statements Are the above conditions sufficient to avoid the Cake or Death problem? Unfortunately, no. The agent will not exhibit bad behaviour with pure value state- ments (nor, since it is a Bayesian agent, with pure factual statements). But it can still cause problems with mixed state- ments that combine value and factual components. Suppose we have the same setup as before. Except we add an extra wrinkle: instead of the agent being capable of making one cake or three deaths, it knows that it can make one of one or three of the other, but is currently unsure which one it can make more of. Take ‘Cake is easy/hard’ to mean ‘The agent can make three/one cake(s)’, and similarly with Death. We assume the agent will be told which option is easy before it has to decide what to make. Its expected utility is 1:5: it knows it will make whatever option is ‘easy’, and this will give it 0:53utility, as its utility is 5050on cake or death. So far, nothing problematic. But suppose it is told, by a reliable source, that ‘the moral value is the hard one’. What 16See the lesswrong.com blog post ‘Conservation of Expected Evidence’. This does not seem a very advanced result, but, to the author’s knowledge, that blog post was the first to write it up in those terms. 17‘Conservation of expected evidence’ is essentially the proba- bilistic version of the reflection principle. is its expected utility now? Once it figures out which op- tion is hard, it will know that is also the moral option, and thus will will produce 1of that option. Its expected utility is therefore 1. Thus if it expects to be told ‘the moral value is the hard one’, it will seek to avoid knowing that fact (and will refrain from asking if it has the option). This is the sophisticated Cake or Death problem again: there is knowledge the agent seeks to avoid. Conversely, if the agent knew it would be told ‘the moral value is the easy one’, it would be desperate to ask. Unlike previously, equation (2) does not get around the problem, however! The problem given here is entirely sym- metric in Cake versus Death, thus for all actions a, E P(C(uc)ja) =E P(C(ud)ja) ; Since P(C(uc)ja) +P(C(ud)ja) = 1 (as these are the only two utilities in the model), this means they are both equal to 0:5. Thus E P(C(uc)ja) = 0:5 =P(C(uc)), and equa- tion (2) is satisfied. What is happening here? Note that though the agent is not affecting the expectation of any P(C(u))through its ac- tions, it is affecting the conditional probability of P(C(u)), given some fact. In particular, P(C(uc)j`asks0)is0if cake is easy, and 1if cake is hard. Thus we have the full problem: Definition 3.3 (Sophisticated Cake or Death – full defini- tion) .The sophisticated Cake or Death problem can hap- pen when a value selecting agent can predictably influence thedirection of change in its current values conditional on any fact . Ifusis its values at time sandutits values at time t > s , then, at time s, it has us(ajh)E(ut(ajh))6= 0 for some action aand fact h(where utis seen as a ran- dom variable, representing the agent’s probability distribu- tion over its expected future values at time t). Depending on the sign of that difference, it may seek to resist or precipitate such a change to its values. To address it, we can update equation (2). For any fact h, 8u8a2A:E P(C(u)jh; a) =P(C(u)jh): (3) Does this resolve the issue above? Since the agent knows that P(C(ud)j`Death is easy0;`ask0) = 0; it must be the case that P(C(ud)j`Death is easy0;`don0t ask0) = P(C(ud)j`Death is easy0) = 0 : And similarly for other statements. Thus it cannot gain any- thing from not asking: it knows it will end up making the hard option anyway. Discussion of value selection criteria Equations (1) and (3) seem sufficient to define our intuitive picture of a value selecting agent immune from motivated selection. Such an agent will use its future utility to judge its future actions, and cannot benefit (in expectation) from manipulating its own values: it doesn’t fear to learn18val- 18More precisely: it won’t choose to avoid learning, if the learn- ing is costless.ues19, facts20, or mixed statements21. It can still seek to find out information about values, of course, but only for the tra- ditional reasons: in order to make better decisions – it has no extra desire for knowing its values. This is the converse of the fact that it does not fear to learn anything about val- ues, and implied by the same equations. Thus, if the agent cannot affect the world, it would be indifferent to changes in its values. As far as the author is aware, no variant of equation (3) exists in the literature, making it a new discovery. In practice, it may be advisable to build an agent that ac- tively seeks out its own values (at least initially). This breaks the symmetry of the equations above, and leads inevitably to situations where it does ‘fear’ to learn something (for in- stance if it expects that someone will offer a contrary view to previous people it interacted with, thus making it ‘unlearn’ some of its values). Still, if (1) and (3) are used as a basis, the effects of adding a pro-learning meta-value can be esti- mated precisely, and the tradeoff balanced, rather than hav- ing complicated effects hidden in the standard motivational structure. Nevertheless, it isn’t clear how to design such an agent22 (and it’s unclear whether equation (3) might be too strong to to be useful23). Thus this paper will turn to another model of value selecting agent, one that doesn’t consider the prob- ability of the correctness of certain values, but that uses an update rule that gives the correct ‘value indifference’ in a more direct way. 4 Value-change indifference The previous section covered the criteria needed to define a value selecting agent that updates its views on the correct- ness of certain values/utility functions. It doesn’t present any examples, though. This section presents a more rough and ready model, that achieves the goal in a more direct, but less elegant, way. It builds upon the idea of ‘utility indifference’ presented initially in (Armstrong 2010), but applies the indifference to updates in values. Utility indifference The concept of utility indifference (Armstrong 2010) grew out originally of a problem in controlling a (putative) arti- ficial intelligence (Bostrom 2014; Yudkowsky 2008). Vari- ous ideas are often suggested for containing such AIs (Arm- strong, Sandberg, and Bostrom 2012) (such as an easily ac- cessible OFF button), but all such ideas have a similar weak- 19By equation (2) which is simply equation (3) with a tautology ash. 20As it is a Bayesian agent. 21By equation (3). 22One idea could be to start with initial probability estimates for C(u)and factual statements, and normalise them in some manner to get the required equalities. But it’s not clear whether this can be done in a general manner and whether this process is vulnerable to outliers or extreme value. 23It may become necessary to restrict the possible h’s considered in the equation. ness: the AI would likely deduce the existence and format of any precautionary measure, and plan in consequence. Thus an off button would motivate the AI to ensure that it took control of the OFF button before taking any suspicious actions. This would still make the setup somewhat safer, but not as safe as it would have been had the AI not adapted its strategy to take the OFF button into account. It was occasionally suggested that the AI be made to ig- nore the button, by giving it the belief that the button would never be pushed. There are two problems with that ap- proach. The first is that ‘the button would never be pushed’ is a complicated concept, hard to define in basic terms. The second is that this involves giving factually incorrect beliefs to a potential superintelligence, a step with unpredictable and potentially dangerous ramifications. Instead, this author came up with the idea of making the AI indifferent as to whether the button would be pushed or not. This was an intervention on its values, not on its beliefs, and thus did not involve any incorrect information. The con- cept of ‘pushing the button’ was simplified: the setup was imagined as a a button sending a shutdown signal along a wire. At some point along the wire, a stochastic event would either let the signal through ( X) or block it (:X). This event was all that needed to be defined for the AI, and its utility function could be tweaked24to value Xequally with:X. Thus it would value a world in which it was shut down (us- ing this method) equally with one in which it continued to operate. Thus it would not act to protect itself from this kind of shutdown. Then the probability of :Xcould be chosen to be extremely low, thus guaranteeing a successful shutdown in practice25. Note that the AI has no motivation to cause the button to be pressed – it is purely indifferent. Similarly, it has not motivation to protect the button’s setup: it may dismantle it if the AI needs extra raw materials. Thus indifference must be combined with more active measures. But the fact that the AI is not motivated to take out a key vulnerability can be very valuable. Other uses of indifference were soon discovered. Oracle AIs (Armstrong, Sandberg, and Bostrom 2012; Yampolskiy 2012) are a design where an AI was confined to only an- swering questions. The risk was that the Oracle could use the contents of its answers to subtly manipulate the outside world (in the extreme, it could attempt to convince those out- side to ‘let it out’26). If, however, the Oracle’s answers were passed through some stochastic process that had an infinites- imal chance of erasing the message, then the Oracle’s utility could be reset so that its values in worlds where the answer was read (the world as it almost certainly would be) was set to that of the world in which it wasn’t read (a world in which 24Crucially, such an intervention could be performed without any need to understand the utility function to any great degree. 25It could be set to zero, but this might interfere with the AI’s computation of counterfactuals. 26See the experiments by Eliezer Yudkowsky (see the AI-box experiments on yudkowsky.net), and seprately by Justin Corwin (see the SL4 mailing list archive), where they played the role of the Oracle AI and convinced others to let them out, even when they are motivated not to.the Oracle’s answers cannot influence human behaviour). Notice that the intervention is not symmetric. An AI whose utility, given X, is reset to its utility, given :X, will behave as if it thought it was impossible for Xto happen. ‘As if’ meaning that such an AI would behave similarly to an AI with an unmodified utility that also believed Xwas impossible (without giving the AI factually wrong informa- tion). Thus resetting the AI the opposite way ( :XtoX rather than Xto:X) would have a very different conse- quence (it would behave as if the shutdown always worked or if its messages were always read). Utility indifference in practice Assume that any world inWcan be specified by setting the values of a (possibly infinite) collection of underlying physical variables. One of these variables is whether Xhappened or:Xdid; thusW decomposes into WXtW:X; whereWXis the subset of worlds where Xhappened, and W:Xthe subset of worlds where :Xdid. Since a world equivalent with a set of variable values, for every world wX2WX, there is a world w:X2W:X(and vice versa) defined by taking the variable values of wXand setting X to:X(and vice versa). Then one method for achieving indifference is simply to replace an initial utility function UwithU0defined by: U0 w:X =U w:X U0 wX =U w:X ; i.e. reset the utility of wXto be that of w:X. This will ob- viously achieve indifference. There are other ways of achieving it, though. Let Pbe a partition ofW, subject to the following two properties: 1. IfwX2S2P, then w:X2S. 2. Ifw2S2P, then P(wjS) =P(wje; S)for any evi- dence ethe AI could encounter before event X(or:X). Property (2) means that the AI cannot change its estimate of the relative properties of worlds in Sbefore the event X– essentially the internal structure of Sis fixed until that point. Then define, for any set T: E(U(T)) =E(U(T)jT) =X w2TP(wjT)U(w): This allows us to give an other method for constructing in- difference (which may be easier in practice) given any such partitionP. Define U0as: U0 w:X =U w:X (4) U0 wX =U wX E U SX +E U S:X :(5) Why does this work? None of the agent’s decisions can affect, before X, the relative probabilities of worlds inside any set S2P, so the agent is effectively reasoning with the utility of these whole sets. Then notice that E U0(SX) = E U0 S:X as E U0(S) =X wX2SXP(wXjSX)U0(wX) =X wX2SXP(wXjSX)h U(wX) E U SX +E U S:Xi =X wX2SXP(wXjSX)U(wX) +"X wX2SXP(wXjSX)#   E U SX +E U S:X =E U SX 1E U SX + 1E U S:X =E U S:X =E U0 S:X : Since P(wXjSX)is unaffected by any evidence the agent could receive before X(or:X), this equality always holds, prior to X. Thus for any such set S, the agent will value Xhappening equally with :X: it has become indifferent to this particular outcome. Since these sets form a partition of the possible worlds, it will always be indifferent. Value-change indifference The author subsequently realised that a modification of the previous formalism allows agents to update their values (utility function) while being indifferent to the process of change. Equation (5) has two components: a subtraction term (re- moving the expected utility given X) and an addition term (adding the expected utility given :X)27. Updating values will proceed similarly. Letuandvbe two different utility functions. The agent is currently following utility u, and receives a signal telling it to change to utility v. As before, we assume that the sig- nal passes through some stochastic process, that is almost certain to let the signal though in practice, but has some infinitesimal chance of blocking it (and blocking it perma- nently – no further transition is allowed). For clarity, let u!vbe the event that the signal gets through (almost cer- tain) and u!uthe event that the signal is blocked (in- credibly unlikely). We wish to make the agent indifferent to whether the signal is sent or not. Thus we make it a U maximiser, where Uis a dynamic utility function currently defined as: U=u+C; where Cis some past constants. Upon receiving the signal, Uwill be updated to U=vE(vju!v) +E(uju!u) +C: (6) 27Though of course either term could be negative.Thus the utility function is decreased by the expectation of its new utility function (given that the utility function ac- tually changes) and increased by the expectation of its old utility function (given that the utility didn’t change). The E(uju!u)term ‘rewards’ it for shifting away from u, and theE(vju!v)term ‘penalises’ it for moving to v. Note that this formula requires either that the utility functions be bounded, or at least that the agent only be capable of pro- ducing bounded utility. Theorem 4.1. The agent will act as a pure umaximiser be- fore the transition, and shift seamlessly to a pure vmax- imiser after. It gains no utility for either blocking or encour- aging the transition. This result flows from a few intermediate results: Lemma 4.2. Suppose the agent is currently a u-maximiser, but will soon transition to being either a vorwmaximiser. It can manipulate its environment to ensure it becomes one or the other, at no cost or gain to u. Then it is indifferent to whether it becomes a vorwmax- imiser. Proof of Lemma. Notice that we could have set w=u, demonstrating that it is indifferent to transitioning out of u or not. For the proof, note that the E(uju!u)term in equation (6) is unchanging whether it chooses to transition to vor w(since by assumption there is no ucost or benefit to the choice process). If it transitions to v, the remaining non- constant terms are vE(vju!v): However, the expectation value of this term, given that it becomes a v-maximiser at transition, is E(vE(vju!v)j‘agent becomes a v-maximiser’ ) = E(vE(vju!v)ju!v) = E(vju!v)E(vju!v) = 0 : The same similarly holds for w. Thus the agent does not expect to gain any more utility from transitioning to vthan tow.  Lemma 4.3. Suppose the agent is currently a u-maximiser, but can take immediate actions to change the value of v (none of these actions changes the value of u). It knows it will soon be a v-maximising agent. Then it will remain indifferent between those actions. Proof of Lemma. The proof is almost identical to the previous proof. Whatever v-changing at actions it take, at transition, the E(uju!u)term in equation (6) is constant (since these actions do not affect uby definition). Let a be one of these actions; then the only non-constant terms at transition are: E(vE(vju!v)ja;‘agent becomes a v-maximiser’ ) = E(vE(vju!v)ja; u!v) = E(vja; u!v)E(vja; u!v) = 0: Thus the gain (or loss) from any such action is canceled out, making the agent indifferent between them.  Lemma 4.4. Suppose the agent can take immediate action to increase the expected value of ubya, or delay for a while and increase the expected value of ubyb(no other options are possible). However, in the interim period, its utility will have changed to v(and it knows this), so will no longer be a u-maximiser. Then it will act now if a > b and will delay action (hence never taking action) if a < b . Proof of Lemma. This lemma is important, as it shows that the agent will not treat transitions of utility as special times, and will continue maximising its current utility as if it expected it to be unchanging, not thinking strategically about the transition. Without loss of generality, assume that the expected value ofuis0if the agent takes neither action. Then consider the the transition equation (6). The vandE(vju!v)terms cancel out in expectation (just as in the proofs of the previ- ous lemmas), so the agents behaviour is determined exclu- sively by the E(uju!u)term, which it will thus seek to maximise. If the agent acts now, this term will become a. This is because the agent would have already generated an extra a expected utility for u, and, were it to stay as a u-maximiser (u!u), it would have no further potential to increase this. If the agent doesn’t act now, that term will become b, as the agent would later have acted to generate an extra bexpected utility for u,if it had stayed au-maximiser. The result fol- lows.  Putting all these proofs together establishes the theorem. Note that the actions of the agent can be defined by the following formula, for a sequence futgof utilities28: argmax a2AE(utja; ut!ut); withut!utdenoting the (extremely unlikely) event that the transition uttout+1fails permanently. Note also that each transition does not change the ex- pected value of U. That is because the expectation of u(given u!uor equivalently given that the agent’s utility never changes29) is the same as the expectation of vE(vju!v) +E(uju!u)(given u!v). Con- sequently, the initial expected value for Uis the expected value of an unchanging u0. Discussion of Value-change indifference This seems to be the ideal value selecting agent. It will act to follow its preferences with perfect assiduity, but remain 28Not necessarily a pre-defined sequence, just using some tran- sition rule. 29Astute readers may have noticed that this is not an exact equiv- alence; there are some situations (multi-agent social situations as a likely candidate) where u!u(a specific transition failure event) might not be equivalent with general ‘unchanging u’. However, any agent can be penalised in social situations for having or not having any particular motivational structure, so this is a general problem, not specifically with this value selection design.completely indifferent if these values were to change in the way prescribed by its program . That last point is important – we’d want the agent to resit illegitimate value manipulation. This seems a partial solution to the value selection problem (the other, much bigger, part of the challenge, is to make the agent converge on values that are human-friendly(Bostrom 2014)). As mentioned in section 3, this approach allows only in- difference. Preference for active learning of values (such as would make sense for an initially fast-learning agent) can be added to the framework. It would destroy the carefully balanced indifference. But, compared with a general value selecting agent, the effect could be precisely estimated and quantified. Paper (Soares et al. 2015) presents some other issues with the indifference approach30, that may be resolvable with a small tweak to the indifference framework. Note that the agent will be willing to sacrifice anything that it may value in the future for a small epsilon of extra value now. This is a feature of the setup, not a bug (indifference requires this), but may still be an undesirable behaviour. See (Soares et al. 2015) for more details. 5 Discussion Constructing a well behaved value selecting agent immune to motivated value selection – one that is capable of learning new values while still acting on its old ones, without interfer- ence between these two aspects – is an important unsolved problem. This paper presented the requirements for such a value selecting agent, if values are presented in the form of utility functions. The agent starts with a probability distribution over the possible correctness Cof possible value systems/utility functions. To avoid problematic motivated value selection, it should be designed so that, if Awere its actions,Uit pos- sible utility functions, and Wthe set of possible world, then it would choose its actions as: argmax a2AX w2WP(wje; a) X u2Uu(w)P(C(u)jw)! ; subject to 8u8a2A:E P(C(u)jh; a) =P(C(u)jh); for all statements h. The first equation can be generalised (for none utility-based agents) to the requirement that the agent use its future values to assess its future actions; the second to the requirement that it cannot profit my manipu- lating how it updates its values. Specifically, that if it knows how its (conditional) values will change, then it will already have changed them. This second requirement can be seen as a ‘conservation of expected ethics’ law. The structure may be too rigid to construct complex agents, however. If we drop the requirement that the agents values be expressed as a probability distribution over possi- ble utility functions, we can construct an explicit model for 30Mainly that the agent might not be motivated to preserve its value updating infrastructure, or could create subagents without the value updating component. a general value selecting agent. This agent comes equipped with a meta utility function U(equal to a given utility func- tion at any given time) combined with some constant terms. When the agent is called upon to update its utility uto an- other utility vaccording to the value selecting process, it will update as: u!vE(vju!v) +E(uju!u): These constant terms ensure that the agent will act as a pure umaximiser before the transition, and shift seamlessly to a purevmaximiser after. It gains no utility for either blocking or encouraging that transition. The big question then becomes the process of making the agent converge to ultimately desirable values. Acknowledgments The author is very grateful for comments, support and help from (in no particular order) Nick Bostrom, Anders Sand- berg, Paul Christiano, Se ´an´Oh´Eigeartaigh, Toby Ord, Nick Beckstead, Daniel Dewey, Eliezer Yudkowsky, Benja Fal- lenstein, Nate Soares, Luke Muehlhauser, Eric Drexler, Robin Hanson, Kaj Sotala, Andrew Snyder-Beattie, Cecilia Tilli, and Lamprini Repouliou. References Armstrong, S.; Sandberg, A.; and Bostrom, N. 2012. Think- ing inside the box: Controlling and using an oracle ai. Minds and Machines 22:299–324. Armstrong, S. 2010. Utility indifference. Technical Report, Future of Humanity Institute, Oxford University #2010-1:1– 5. Armstrong, S. 2013. General purpose intelligence: arguing the orthogonality thesis. Analysis and Metaphysics . Bostrom, N. 2012. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines 22:71–85. Bostrom, N. 2014. Superintelligence: Paths, dangers, strategies . Oxford University Press. Dewey, D. 2011. Learning what to value. In Artificial Gen- eral Intelligence , 309–314. Springer Berlin Heidelberg. Edmonds, D. 2013. Would you kill the fat man? The trol- ley problem and what your answer tells us about right and wrong . Princeton University Press. Gibbard, A., and Harper, W. L. 1981. Counterfactuals and two kinds of expected utility. Ifs: Conditionals, Beliefs, De- cision, Chance, and Time 153–190. Goertzel, B., and Pitt, J. 2012. Nine ways to bias open- source agi toward friendliness. Journal of Evolution and Technology 22:116–131. Lewis, D. 1981. Causal decision theory. Australasian Jour- nal of Philosophy 59(1):5–30. Omohundro, S. M. 2008. The basic ai drives. Frontiers in Artificial Intelligence and applications 171:483–492. Pinker, S. 2003. The blank slate: The modern denial of human nature . Penguin.Sepielli, A. 2013. Moral uncertainty and the principle of equity among moral theories. Philosophy and Phenomeno- logical Research 86:580–589. Soares, N.; Fallenstein, B.; Yudkowsky, E.; and Armstrong, S. 2015. Corrigibility. submitted to the 1st International Workshop on AI and Ethics . van Fraassen, B. C. 1984. Belief and the will. The Journal of Philosophy 235–256. von Neumann, J., and Morgenstern, O. 1944. Theory of Games and Economic Behavior . Princeton, NJ, Princeton University Press. Yampolskiy, R. V . 2012. Leakproofing the singularity: ar- tificial intelligence confinement problem. Journal of Con- sciousness Studies 19:194–214. Yudkowsky, E. 2008. Artificial intelligence as a positive and negative factor in global risk. In Bostrom, N., and ´Cirkovi ´c, M. M., eds., Global catastrophic risks , 308–345. New York: Oxford University Press.
f7f6a0ed-f816-405e-8ca3-d6064b175eb4
trentmkelly/LessWrong-43k
LessWrong
Premortem as communication device (e.g. in relationship) Epistemic status: My partner and I did 3 premortems for our romantic relationship and we found it surprisingly helpful. Summary 1. My partner and I did 3 premortems for our romantic relationship. 2. The procedure is: define prompts; collect failures; independently estimate risk of each; align on estimates; brainstorm solutions. 3. In addition to usual benefits of a single person premortem, such session becomes a dedicated safe space to raise and discuss concerns in batch. What is premortem? Premortem (aka Murphyjitsu) is "a process for bulletproofing your strategies and plans". (CFAR handbook). The idea is to first think how your plans can fail, then brainstorm ways to prevent these failures. For a deeper introduction please see Murphyjitsu section in the CFAR handbook. Procedure My partner and I read the CFAR handbook together. We decided to do a premortem on our relationship. This might have sounded awkward ("Let's brainstorm how our relationship can fail"), but keeping the end goal - improving likelihood of success - helped to avoid this pitfall. Since then we did 3 premortems and converged to a following procedure: 1. Define prompt(s). The goal is to make sure that we brainstorm in the same direction. Examples: 1. it is 6 month from now, we broke up, how did this happen? 2. we came back from a vacation, it was terrible, why? 2. Independently collect failure scenarios. Separately collect ideas why the failure could occur. I personally do this across multiple days. This way I just notice potential failures during conversations and other activities. My partner prefers to brainstorm right before the discussion. Doing this separately helps to improve our coverage, since we don't bias each other. 3. Explain scenarios. Add your scenarios to a shared spreadsheet. Make sure that both of you understand all scenarios. If not, give examples or ask questions. 4. Independently rank scenarios. For each scenario separately estimate expected impact from 1 (
1e5834f8-c534-4e85-b10b-93e147d03c77
StampyAI/alignment-research-dataset/blogs
Blogs
January 2017 Newsletter | | | --- | | Eliezer Yudkowsky’s new introductory talk on AI safety is out, in text and video forms: “[The AI Alignment Problem: Why It’s Hard, and Where to Start](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/).” Other big news includes the release of version 1 of [*Ethically Aligned Design*](http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf), an IEEE recommendations document with a section on artificial general intelligence that we helped draft. **Research updates** * A new paper: “[Optimal Polynomial-Time Predictors: A Bayesian Notion of Approximation Algorithm](https://intelligence.org/2016/12/31/new-paper-optimal-polynomial-time-estimators/).” * New at IAFF: [The Universal Prior is Malign](https://agentfoundations.org/item?id=1094); [Jessica Taylor’s Take on Paul Christiano and MIRI’s Disagreement on Alignability of Messy AI](https://agentfoundations.org/item?id=1129) * New at AI Impacts: [Concrete AI Tasks for Forecasting](http://aiimpacts.org/concrete-ai-tasks-for-forecasting/) * We ran our third [workshop on machine learning and AI safety](https://intelligence.org/workshops/#december-2016), focusing on (among other topics) mild optimization and conservative concept learning. * MIRI Research Fellow Andrew Critch is spending part of his time at the [Center for Human-Compatible AI](http://humancompatible.ai) as a visiting scholar. **General updates** * I’m happy to announce that our informal [November/December fundraising push](https://intelligence.org/2016/11/11/post-fundraiser-update/) was a  success, with donations totaling ~$450,000! To all of our supporters, on MIRI’s behalf: thank you. Special thanks to [Raising for Effective Giving](https://reg-charity.org/), who contributed ~$96,000 in all to our fundraiser and our end-of-the-year push. * [Open Philanthropy Project staff](http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016) and [80,000 Hours](https://80000hours.org/2016/12/the-effective-altruism-guide-to-donating-this-giving-season/) highlight MIRI, the Future of Humanity Institute, and a number of other organizations as good giving opportunities for people still considering their donation options. * Critch spoke at the annual meeting of the [Society for Risk Analysis](http://www.sra.org/) ([slides](https://intelligence.org/files/AlignmentWorldResearchPriority.pdf)). We also attended the [Cambridge Conference on Catastrophic Risk](http://cser.org/cccr2016/) and NIPS; see DeepMind researcher Viktoriya Krakovna’s [NIPS safety paper highlights](http://futureoflife.org/2016/12/28/ai-safety-highlights-nips-2016/). * MIRI Executive Director Nate Soares gave a talk on logical induction at [EAGxOxford](http://eagxoxford.com/), and participated in a panel discussion on “The Long-Term Situation in AI” with Krakovna, Demis Hassabis, Toby Ord, and Murray Shanahan. * [Intelligence in Literature Prize](https://www.reddit.com/r/rational/comments/5lnj5r/announcement_intelligence_in_literature_monthly/): We’re helping administer a $100 prize each month to the best new fiction touching on ideas related to intelligence, AI, and the alignment problem. Send your submissions to [intelligenceprize@gmail.com](mailto:intelligenceprize@gmail.com). **News and links** * Gwern Branwen argues that more autonomous intelligent systems are likely to [systematically outperform “tool-like” AI systems](http://www.gwern.net/Tool%20AI). * “[Policy Desiderata in the Development of Machine Superintelligence](https://www.fhi.ox.ac.uk/new-working-paper-policy-desiderata-in-the-development-of-machine-superintelligence/)“: Nick Bostrom, Allan Dafoe, and Carrick Flynn outline ten key AI policy considerations. * [Faulty Reward Functions in the Wild](https://openai.com/blog/faulty-reward-functions/): OpenAI’s Dario Amodei and Jack Clark illustrate a core obstacle to aligning reinforcement learning systems. * Open Phil [updates its position](http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update): “On balance, our very tentative, unstable guess is the ‘last dollar’ we will give (from the pool of currently available capital) has *higher* expected value than gifts to GiveWell’s top charities today.” * Carl Shulman argues that risk-neutral philanthropists of all sizes who are well-aligned with Open Phil should use [donor lotteries](http://effective-altruism.com/ea/14d/donor_lotteries_a_stepbystep_guide_for_mall/) to [rival Open Phil’s expected impact per dollar](http://effective-altruism.com/ea/15g/small_donors_can_plan_to_make_better_bets_than/). | The post [January 2017 Newsletter](https://intelligence.org/2017/01/04/january-2017-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org).
974bc491-bf75-4791-ae96-7480c50c30b2
StampyAI/alignment-research-dataset/alignmentforum
Alignment Forum
Clarifying "AI Alignment" When I say an AI A is *aligned with* an operator H, I mean: > *A is trying to do what H wants it to do.* The “alignment problem” is the problem of building powerful AI systems that are aligned with their operators. This is significantly narrower than some other definitions of the alignment problem, so it seems important to clarify what I mean. In particular, this is the problem of getting your AI to try to do the right thing, **not** the problem of figuring out which thing is right. An aligned AI would try to figure out which thing is right, and like a human it may or may not succeed. Analogy ------- Consider a human assistant who is trying their hardest to do what H wants. I’d say this assistant is aligned with H. If we build an AI that has an analogous relationship to H, then I’d say we’ve solved the alignment problem. “Aligned” doesn’t mean “perfect:” * They could misunderstand an instruction, or be wrong about what H wants at a particular moment in time. * They may not know everything about the world, and so fail to recognize that an action has a particular bad side effect. * They may not know everything about H’s preferences, and so fail to recognize that a particular side effect is bad. * They may build an unaligned AI (while attempting to build an aligned AI). I use alignment as a statement about the *motives* of the assistant, not about their knowledge or ability. Improving their knowledge or ability will make them a better assistant — for example, an assistant who knows everything there is to know about H is less likely to be mistaken about what H wants — but it won’t make them *more aligned.* (For very low capabilities it becomes hard to talk about alignment. For example, if the assistant can’t recognize or communicate with H, it may not be meaningful to ask whether they are aligned with H.) Clarifications -------------- * The definition is intended [*de dicto* rather than *de re*](https://en.wikipedia.org/wiki/De_dicto_and_de_re)*.* An aligned A is trying to “do what H wants it to do.” Suppose A thinks that H likes apples, and so goes to the store to buy some apples, but H really prefers oranges. I’d call this behavior aligned because A is trying to do what H wants, even though the thing it is trying to do (“buy apples”) turns out not to be what H wants: the *de re* interpretation is false but the *de dicto* interpretation is true. * An aligned AI can make errors, including moral or psychological errors, and fixing those errors isn’t part of my definition of alignment except insofar as it’s part of getting the AI to “try to do what H wants” *de dicto*. This is a critical difference between my definition and some other common definitions. I think that using a broader definition (or the *de re* reading) would also be defensible, but I like it less because it includes many subproblems that I think (a) are much less urgent, (b) are likely to involve totally different techniques than the urgent part of alignment. * An aligned AI would also be trying to do what H wants **with respect to clarifying H’s preferences**. For example, it should decide whether to ask if H prefers apples or oranges, based on its best guesses about how important the decision is to H, how confident it is in its current guess, how annoying it would be to ask, *etc.* Of course, it may also make a mistake at the meta level — for example, it may not understand when it is OK to interrupt H, and therefore avoid asking questions that it would have been better to ask. * This definition of “alignment” is extremely imprecise. I expect it to correspond to some more precise concept that cleaves reality at the joints. But that might not become clear, one way or the other, until we’ve made significant progress. * One reason the definition is imprecise is that it’s unclear how to apply the concepts of “intention,” “incentive,” or “motive” to an AI system. One naive approach would be to equate the incentives of an ML system with the objective it was optimized for, but this seems to be a mistake. For example, humans are optimized for reproductive fitness, but it is wrong to say that a human is incentivized to maximize reproductive fitness. * “What H wants” is even more problematic than “trying.” Clarifying what this expression means, and how to operationalize it in a way that could be used to inform an AI’s behavior, is part of the alignment problem. Without additional clarity on this concept, we will not be able to build an AI that tries to do what H wants it to do. Postscript on terminological history ------------------------------------ I [originally](https://ai-alignment.com/ai-safety-vs-control-vs-alignment-2a4b42a863cc) described this problem as part of “the AI control problem*,”* following Nick Bostrom’s usage in *Superintelligence*, and used “the alignment problem” to mean “understanding how to build AI systems that share human preferences/values” (which would include efforts to clarify human preferences/values). I adopted the new terminology after some people expressed concern with “the control problem.” There is also a slight difference in meaning: the control problem is about coping with the possibility that an AI would have different preferences from its operator. Alignment is a particular approach to that problem, namely avoiding the preference divergence altogether (so excluding techniques like “put the AI in a really secure box so it can’t cause any trouble”). There currently seems to be a tentative consensus in favor of this approach to the control problem. I don’t have a strong view about whether “alignment” should refer to this problem or to something different. I do think that *some* term needs to refer to this problem, to separate it from other problems like “understanding what humans want,” “solving philosophy,” *etc.* --- *This post was originally published [here](https://ai-alignment.com/clarifying-ai-alignment-cec47cd69dd6) on 7th April 2018.* *The next post in this sequence will post on Saturday, and will be "An Unaligned Benchmark" by Paul Christiano.* *Tomorrow's AI Alignment Sequences post will be the first in a short new sequence of technical exercises from Scott Garrabrant.*
de3e5e5a-970f-4410-976a-814a3dc6a899
trentmkelly/LessWrong-43k
LessWrong
Logics for Mind-Building Should Have Computational Meaning The Workshop Late in July I organized and held MIRIx Tel-Aviv with the goal of investigating the currently-open (to my knowledge) Friendly AI problem called "logical probability": the issue of assigning probabilities to formulas in a first-order proof system, in order to use the reflective consistency of the probability predicate to get past the Loebian Obstacle to building a self-modifying reasoning agent that will trust itself and its successors.  Vadim Kosoy, Benjamin and Joshua Fox, and myself met at the Tel-Aviv Makers' Insurgence for six hours, and each presented our ideas.  I spent most of it sneezing due to my allergies to TAMI's resident cats. My idea was to go with the proof-theoretic semantics of logic and attack computational construction of logical probability via the Curry-Howard Isomorphism between programs and proofs: this yields a rather direct translation between computational constructions of logical probability and the learning/construction of an optimal function from sensory inputs to actions required by Updateless Decision Theory. The best I can give as a mathematical result is as follows: The capital  is a set of hypotheses/axioms/assumptions, and the English letters are metasyntactic variables (like "foo" and "bar" in programming lessons).  The lower-case letters denote proofs/programs, and the upper-case letters denote propositions/types.  The turnstile  just means "deduces": the judgement can be read here as "an agent whose set of beliefs is denoted will believe that the evidence a proves the proposition A."  The  performs a "reversed" substitution, with the result reading: "for all y proving/of-type B, substitute x for y in a".  This means that we algorithmically build a new proof/construction/program from a in which any and all constructions proving the proposition B are replaced with the logically-equivalent hypothesis x, which we have added to our hypothesis-set . Thus the first equation reads, "the probability of a provin
eec083b2-7ff0-462a-bf84-36fd678a5a6b
trentmkelly/LessWrong-43k
LessWrong
Even if you have a nail, not all hammers are the same (Related to Over-ensapsulation and Subtext is not invariant under linear transformation) Between 2004 and 2007, Goran Bjelakovic et al. published 3 famous meta-analysis of vitamin supplements, concluding that vitamins don't help people but instead kill people.  This is now the accepted dogma; and if you ask your doctor about vitamins, she's likely to tell you not to take them, based on reading either one of these articles, or one of the many summaries of these articles made in secondary sources like The Mayo Clinic Journal. The 2007 study claims that beta-carotene and vitamins A and E are positively correlated with death - the more you take, the more likely you are to die. Therefore, vitamins kill.  The conclusion on E requires a little explanation, but the data on beta-carotene and A is simple and specific: > Univariate meta-regression analyses revealed significant influences of dose of beta carotene (Relative Risk (RR), 1.004; 95% CI, 1.001-1.007; P = .012), dose of vitamin A (RR, 1.000006; 95% CI, 1.000002-1.000009; P = .003), ... on mortality. This appears to mean that, for each mg of beta carotene that you take, your risk of death increases by a factor (RR) of 1.004; for each IU of vitamin A that you take, by a factor of 1.000006.  "95% CI, 1.001-1.007" means that the standard deviation of the sample indicates a 95% probability that the true RR lies somewhere between 1.001 and 1.007.  "P = .012" means that there's only a 1.2% chance that you would be so unlucky as to get a sample giving that result, if in fact the true RR were 1. A risk factor of 1.000006 doesn't sound like much; but I'm taking 2,500 IU of vitamin A per day.  That gives a 1.5% increase in my chance of death!  (Per 3.3 years.)  And look at those P-values: .012, .003! So why do I still take vitamins? What all of these articles do, in excruciating detail with regard to sample selection (though not so much with regard to the math), is to run a linear regression on a lot of data from studies
d4268539-d611-4d32-b2b8-d73f56ee4a49
trentmkelly/LessWrong-43k
LessWrong
All AGI Safety questions welcome (especially basic ones) [~monthly thread] tl;dr: Ask questions about AGI Safety as comments on this post, including ones you might otherwise worry seem dumb! Asking beginner-level questions can be intimidating, but everyone starts out not knowing anything. If we want more people in the world who understand AGI safety, we need a place where it's accepted and encouraged to ask about the basics. We'll be putting up monthly FAQ posts as a safe space for people to ask all the possibly-dumb questions that may have been bothering them about the whole AGI Safety discussion, but which until now they didn't feel able to ask. It's okay to ask uninformed questions, and not worry about having done a careful search before asking. AISafety.info - Interactive FAQ Additionally, this will serve as a way to spread the project Rob Miles' volunteer team[1] has been working on: Stampy and his professional-looking face aisafety.info. Once we've got considerably more content[2] this will provide a single point of access into AI Safety, in the form of a comprehensive interactive FAQ with lots of links to the ecosystem. We'll be using questions and answers from this thread for Stampy (under these copyright rules), so please only post if you're okay with that! You can help by adding other people's questions and answers or getting involved in other ways! We're not at the "send this to all your friends" stage yet, we're just ready to onboard a bunch of editors who will help us get to that stage :) Stampy - Here to help everyone learn about stamp maximization AGI Safety! We welcome feedback[3] and questions on the UI/UX, policies, etc. around Stampy, as well as pull requests to his codebase. You are encouraged to add other people's answers from this thread to Stampy if you think they're good, and collaboratively improve the content that's already on our wiki. We've got a lot more to write before he's ready for prime time, but we think Stampy can become an excellent resource for everyone from skeptical newcomers, through people w
8365dbb2-ce74-4f9d-91e6-c9a169ee133a
trentmkelly/LessWrong-43k
LessWrong
Where does uncertainty come from? Idealized rational agents are often discussed as operating in an uncertain environment, or as using probabilities to correctly reason anthropically (especially in MWI or Tegmark style multiverses). I believe that both of these sources of uncertainty are essential for a rational agent, but thinking about them obscures an important (I suspect ultimately more important) type of uncertainty, faced by computationally bounded agents making predictions about a fixed, perfectly understood, deterministic universe. This type of uncertainty is described by Eliezer in his description of TDT, but he doesn't deal there with any of the mathematical theory that would govern such uncertainties. I am somewhat surprised that I have not encountered more discussion of this form of uncertainty and the difficulty of dealing with it mathematically. Consider a system consisting of two interacting algorithms: a predictor and a universe. Periodically the universe sends observations to the predictor, or asks the predictor to make a prediction (ie to answer some question about the state of the universe). The observer must answer these queries using some bounded resource as dictated by the universe; typically we imagine that the observer is allotted a fixed length of time. We rate the quality of a predictor by measuring how often its predictions agree with what the universe expects. Now suppose I give you the description of the universe, and ask for the description of a good predictor. In some universes, I believe that there are very good predictors who perform something like Bayesian inference over mathematical statements---despite the fact that any consistent assignment of probabilities gives most statements of interest probability either 0 or 1. For example, suppose that a particular theorem X would be useful to make future predictions. I would like to know whether or not X is true, but the proof may be too complex to deal with (or X may be independent of whatever axioms I have programmed
dc7d958f-34cf-4860-9a77-b3895631df3a
StampyAI/alignment-research-dataset/arbital
Arbital
'Detrimental' The opposite of [https://arbital.com/p/-3d9](https://arbital.com/p/-3d9). A [reserved term](https://arbital.com/p/9p) in [AGI alignment theory](https://arbital.com/p/2v) that acts as a speaker-dependent variable denoting whatever the speaker means by "bad" or "no, really actually bad" or "bad in the long-run", from within whatever their view is on [where the future ought to go](https://arbital.com/p/55). See the entry for [https://arbital.com/p/55](https://arbital.com/p/55).
bd8216f0-2ee7-4610-bc07-eb1498b16f13
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Some cruxes on impactful alternatives to AI policy work *[Crossposted from Less Wrong](https://www.lesswrong.com/posts/DJB82jKwgJE5NsWgT/some-cruxes-on-impactful-alternatives-to-ai-policy-work).* [Ben Pace](https://forum.effectivealtruism.org/users/ben-pace) and I (Richard Ngo) recently did a public double crux at the Berkeley REACH on how valuable it is for people to go into AI policy and strategy work: I was optimistic and Ben was pessimistic. During the actual event, we didn't come anywhere near to finding a double crux on that issue. But after a lot of subsequent discussion, we've come up with some more general cruxes about where impact comes from. I found Ben's model of how to have impact very interesting, and so in this post I've tried to explain it, along with my disagreements. Ben liked the goal of writing up a rough summary of our positions and having further discussion in the comments, so while he edited it somewhat he doesn’t at all think that it’s a perfect argument, and it’s not what he’d write if he spent 10 hours on it. He endorsed the wording of the cruxes as broadly accurate. (During the double crux, we also discussed how the heavy-tailed worldview applies to community building, but decided on this post to focus on the object level of what impact looks like.) Note from Ben: “I am not an expert in policy, and have not put more than about 20-30 hours of thought into it total as a career path. But, as I recently heard Robin Hanson say, there’s a common situation that looks like this: some people have a shiny idea that they think about a great deal and work through the details of, that folks in other areas are skeptical of given their particular models of how the world works. Even though the skeptics have less detail, it can be useful to publicly say precisely why they’re skeptical. In this case I’m often skeptical when folks tell me they’re working to reduce x-risk by focusing on policy. Folks doing policy work in AI might be right, and I might be wrong, but it seemed like a good use of time to start a discussion with Richard about how I was thinking about it and what would change my mind. If the following discussion causes me to change my mind on this question, I’ll be really super happy with it.” Ben's model: Life in a heavy-tailed world ----------------------------------------- A [heavy-tailed distribution](https://en.wikipedia.org/wiki/Heavy-tailed_distribution) is one where the probability of extreme outcomes doesn’t drop very rapidly, meaning that outliers therefore dominate the expectation of the distribution. Owen Cotton-Barratt has written a brief explanation of the idea [here](https://www.effectivealtruism.org/articles/prospecting-for-gold-owen-cotton-barratt/#heavy-tailed-distributions). Examples of heavy-tailed distributions include the Pareto distribution and the log-normal distribution; other phrases people use to point at this concept include ‘power laws’ (see [Zero to One](https://www.amazon.co.uk/Zero-One-Notes-Start-Future/dp/0753555204/ref=sr_1_1?ie=UTF8&qid=1538077169&sr=8-1&keywords=zero+to+one)) and ‘black swans’ (see the recent [SSC book review](http://slatestarcodex.com/2018/09/19/book-review-the-black-swan/)). Wealth is a heavy-tailed distribution, because many people are clustered relatively near the median, but the wealthiest people are millions of times further away. Human height and weight and running speed are not heavy-tailed; there is no man as tall as 100 people. There are three key claims that make up Ben's view. **The first claim is that, since the industrial revolution, we live in a world where the impact that small groups can have is much more heavy-tailed than in the past.** * People can affect incredibly large numbers of other people worldwide. The Internet is an example of a revolutionary development which allows this to happen very quickly. * Startups are becoming unicorns unprecedentedly quickly, and their valuations are very heavily skewed. * The impact of global health interventions is heavy-tail distributed. So is funding raised by Effective Altruism - two donors have contributed more money than everyone else combined. * Google and Wikipedia qualitatively changed how people access knowledge; people don't need to argue about verifiable facts any more. * Facebook qualitatively changed how people interact with each other (e.g. FB events is a crucial tool for most local EA groups), and can swing elections. * It's not just that we got more extreme versions of the same things, but rather that we can get unforeseen types of outcomes. * The books *HPMOR* and *Superintelligence* both led to mass changes in plans towards more effective ends via the efforts of individuals and small groups. **The second claim is that you should put significant effort into re-orienting yourself to use high-variance strategies.** * Ben thinks that recommending strategies which are *safe* and *low-risk* is insane when pulling out of a heavy-tailed distribution. You want everyone to be taking high-variance strategies. + This is only true if the tails are long to the right and not to the left, which seems true to Ben. Most projects tend to end up not pulling any useful levers whatever and just do nothing, but a few pull crucial levers and solve open problems or increase capacity for coordination. * Your intuitions were built for the ancestral environment where you didn’t need to be able to think about coordinating humans on the scale of millions or billions, and yet you still rely heavily on the intuitions you’re built with in navigating the modern environment. * [Scope insensitivity](https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity), [framing effects](https://www.lesswrong.com/posts/Nx2WxEuPSvNBGuYpo/feeling-moral), [taboo tradeoffs](http://www.overcomingbias.com/2017/12/automatic-norms.html), and [risk aversion](https://rationalaltruist.com/2013/02/28/risk-aversion-and-investment-for-altruists/), are the key things here. You need to learn to train your S1 to understand *math*. + By default, you’re not going to spend enough effort finding or executing high-variance strategies. * We're still only 20 years into the internet era. Things keep changing qualitatively, but Ben feels like everyone keeps adjusting to the new technology as if it were always this way. * Ben: “My straw model of the vast majority of people’s attitudes is: I guess Facebook and Twitter are just things now. I won’t spend time thinking about whether I could build a platform as successful as those two but optimised better for e.g. intellectual progress or social coordination - basically not just money.” * Ben: “I do note that never in history has change been happening so quickly, so it makes sense that people’s intuitions are off.” * While many institutions have been redesigned to fit the internet, Ben feels like almost nobody is trying to improve institutions like science on a large scale, and that this is clear low-hanging altruistic fruit. * The Open Philanthropy Project has gone through this process of updating away from safe, low-risk bets with GiveWell, toward [hits-based giving](https://www.openphilanthropy.org/blog/hits-based-giving), which is an example of this kind of move. **The third claim is that AI policy is not a good place to get big wins nor to learn the relevant mindset.** * Ben: “On a first glance, governments, politics and policy looks like the sort of place where I would not expect to find highly exploitable strategies, nor a place that will teach me the sorts of thinking that will help me find them in future.” * People in policy spend a lot of time thinking about how to influence governments. But governments are generally too conventional and slow to reap the benefits of weird actions with extreme outcomes. * Working in policy doesn't cultivate the right type of thinking. You're usually in a conventional governmental (or academic) environment, stuck inside the system, getting seduced by local incentive gradients and prestige hierarchies. You often need to spend a long time working your way to positions of actual importance in the government, which leaves you prone to value drift or over-specialisation in the wrong thing. + At the very least, you have to operate on the local incentives as well as someone who actually cares about them, which can be damaging to one’s ability to think clearly. * Political landscapes are not the sort of environment where people can easily ignore the local social incentives to focus on long-term, global goals. Short term thinking (election cycles, media coverage, etc) is not the sort of thinking that lets you build a new institution over 10 years or more. + Ben: “When I’ve talked to senior political people, I’ve often heard things of the sort ‘We were working on a big strategy to improve infrastructure / international aid / tech policy etc, but then suddenly public approval changed and then we couldn’t make headway / our party wasn’t in power / etc.’ which makes me think long term planning is strongly disincentivised.” * One lesson of a heavy-tailed world is that signals that you’re taking safe bets are *anti-signals* of value. Many people following a standard academic track saying “Yeah, I’m gonna get a masters in public policy” sounds *fine*, *sensible, and safe*, and therefore *cannot* be an active sign that you will do something a million times more impactful than the median. The above is not a full, gears-level analysis of how to find and exploit a heavy tail, because almost all of the work here lies in identifying the particular strategy. Nevertheless, because of the considerations above, Ben thinks that talented, agenty and rational people should be able in many cases to identify places to win, and then execute those plans, and that this is much less the case in policy. Richard's model: Business (mostly) as usual ------------------------------------------- I disagree with Ben on all three points above, to varying degrees. On the first point, I agree that the distribution of success has become much more heavy-tailed since the industrial revolution. However, I think the distribution of success is often very different from the distribution of impact, because of replacement effects. If Facebook hadn't become the leading social network, then MySpace would have. If not Google, then Yahoo. If not Newton, then Leibniz (and if Newton, then Leibniz anyway). Probably the alternatives would have been somewhat worse, but not significantly so (and if they were, different competitors would have come along). The distinguishing trait of modernity is that even a small difference in quality can lead to a huge difference in earnings, via network effects and global markets. But that isn't particularly interesting from an x-risk perspective, because money isn't anywhere near being our main bottleneck. You might think that since Facebook has billions of users, their executives are a small group with a huge amount of power, but I claim that they're much more constrained by competitive pressures than they seem. Their success depends on the loyalty of their users, but the bigger they are, the easier it is for them to seem untrustworthy. They also need to be particularly careful since antitrust cases have busted the dominance of several massive tech companies before. (While they could swing a few elections before being heavily punished, I don’t think this is unique to the internet age - a small cabal of newspaper owners could probably have done the same centuries ago). Similarly, I think the founders of Wikipedia actually had fairly little counterfactual impact, and currently have fairly little power, because they're reliant on editors who are committed to impartiality. What we should be more interested in is cases where small groups didn't just ride a trend, but actually created or significantly boosted it. Even in those cases, though, there's a big difference between success and impact. Lots of people have become very rich from shuffling around financial products or ad space in novel ways. But if we look at the last fifty years overall, they're far from dominated by extreme transformative events - in fact, Western societies have changed very little in most ways. Apart from IT, our technology remains roughly the same, our physical surroundings are pretty similar, and our standards of living have stayed flat or even dropped slightly. (This is a version of Tyler Cowen and Peter Thiel's views; for a better articulation, I recommend *The Great Stagnation* or *The Complacent Class).* Well, isn't IT enough to make up for that? I think it will be eventually, as AI develops, but right now most of the time spent on the internet is wasted. I don't think current IT has had much of an effect by standard metrics of labour productivity, for example. **Should you pivot?** Ben might claim that this is because few people have been optimising hard for positive impact using high-variance strategies. While I agree to some extent, I also think that there are pretty strong incentives to have impact regardless. We're in the sort of startup economy where scale comes first and monetisation comes second, and so entrepreneurs already strive to create products which influence millions of people even when there’s no clear way to profit from them. And entrepreneurs are definitely no strangers to high-variance strategies, so I expect most approaches to large-scale influence to already have been tried. On the other hand, I do think that reducing existential risk is an area where a small group of people are managing to have a large influence, a claim which seems to contrast with the assertion above. I’m not entirely sure how to resolve this tension, but I’ve been thinking lately about an analogy from finance. [Here's Tyler Cowen](https://medium.com/conversations-with-tyler/nate-silver-conversations-with-tyler-1bdafe685d77): > I see a lot of money managers, so there’s Ray Dalio at Bridgewater. He saw one basic point about real interest rates, made billions off of that over a great run. Now it’s not obvious he and his team knew any better than anyone else. > Peter Lynch, he had fantastic insights into consumer products. Use stuff, see how you like it, buy that stock. He believed that in an age when consumer product stocks were taking off. > Warren Buffett, a certain kind of value investing. Worked great for a while, no big success, a lot of big failures in recent times. The analogy isn’t perfect, but the idea I want to extract is something like: once you’ve identified a winning strategy or idea, you can achieve great things by exploiting it - but this shouldn’t be taken as strong evidence that you can do exceptional things in general. For example, having a certain type of personality and being a fan of science fiction is very useful in identifying x-risk as a priority, but not very useful in founding a successful startup. Similarly, being a philosopher is very useful in identifying that helping the global poor is morally important, but not very useful in figuring out how to solve systemic poverty. From this mindset, instead of looking for big wins like “improving intellectual coordination”, we should be looking for things which are easy conditional on existential risk actually being important, and conditional on the particular skillsets of x-risk reduction advocates. Another way of thinking about this is as a distinction between high-impact goals and high-variance strategies: once you’ve identified a high-impact goal, you can pursue it without using high-variance strategies. Startup X may have a crazy new business idea, but they probably shouldn't execute it in crazy new ways. Actually, their best bet is likely to be joining Y Combinator, getting a bunch of VC funding, and following Paul Graham's standard advice. Similarly, reducing x-risk is a crazy new idea for how to improve the world, but it's pretty plausible that we should pursue it in ways similar to those which other successful movements used. Here are some standard things that have historically been very helpful for changing the world: * dedicated activists * good research * money * public support * political influence My prior says that all of these things matter, and that most big wins will be due to direct effects on these things. The last two are the ones which we’re disproportionately lacking; I’m more optimistic about the latter for a variety of reasons. **AI policy is a particularly good place to have a large impact.** Here's a general argument: governments are very big levers, because of their scale and ability to apply coercion. A new law can be a black swan all by itself. When I think of really massive wins over the past half-century, I think about the eradication of smallpox and polio, the development of space technology, and the development of the internet. All of these relied on and were driven by governments. Then, of course, there are the massive declines in poverty across Asia in particular. It's difficult to assign credit for this, since it's so tied up with globalisation, but to the extent that any small group was responsible, it was Asian governments and the policies of Deng Xiaoping, Lee Kuan Yew, Rajiv Gandhi, etc. You might agree that governments do important things, but think that influencing them is very difficult. Firstly, that's true for most black swans, so I don't think that should make policy work much less promising even from Ben's perspective. But secondly, from the outside view, our chances are pretty good. We're a movement comprising many very competent, clever and committed people. We've got the sort of backing that makes policymakers take people seriously: we're affiliated with leading universities, tech companies, and public figures. It's likely that a number of EAs at the best universities already have friends who will end up in top government positions. We have enough money to do extensive lobbying, if that's judged a good idea. Also, we're correct, which usually helps. The main advantage we're missing is widespread popular support, but I don't model this as being crucial for issues where what's needed is targeted interventions which "pull the rope sideways". (We're also missing knowledge about what those interventions should be, but that makes policy research even more valuable). Here's a more specific route to impact: in a few decades (assuming long timelines and slow takeoff) AIs that are less generally intelligent that humans will be causing political and economic shockwaves, whether that's via mass unemployment, enabling large-scale security breaches, designing more destructive weapons, psychological manipulation, or something even less predictable. At this point, governments will panic and AI policy advisors will have real influence. If competent and aligned people were the obvious choice for those positions, that'd be fantastic. If those people had spent several decades researching what interventions would be most valuable, that'd be even better. This perspective is inspired by Milton Friedman, who argued that the way to create large-scale change is by nurturing ideas which will be seized upon in a crisis. > Only a crisis - actual or perceived - produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the possible. The major influence of the Institute of Economic Affairs on Thatcher’s policies is an example of this strategy’s success. An advantage of this approach is that it can be implemented by clusterings of like-minded people collaborating with each other; for that reason, I'm not so worried about policy work cultivating the wrong mindset (I'd be more worried on this front if policy researchers were very widely spread out). Another fairly specific route to impact: several major AI research labs would likely act on suggestions for coordinating to make AI safer, if we had any. Right now I don’t think we do, and so research into that could have a big multiplier. If a government ends up running a major AI lab (which seems pretty likely conditional on long timelines) then they may also end up following this advice, via the effect described in the paragraph above. **Underlying generators of this disagreement** More generally, Ben and I disagree on where the bottleneck to AI safety is. I think that finding a technical solution is probable, but that most solutions would still require careful oversight, which may or may not happen (maybe 50-50). Ben thinks that finding a technical solution is improbable, but that if it's found it'll probably be implemented well. I also have more credence on long timelines and slow takeoffs than he does. I think that these disagreements affect our views on the importance of influencing governments in particular. We also have differing views on what the x-risk reduction community should look like. I favour a broader, more diverse community; Ben favours a narrower, more committed community. I don't want to discuss this extensively here, but I will point out that there are many people who are much better at working within a system than outside it - people who would do well in AI safety PhDs, but couldn't just teach themselves to do good research from scratch like Nate Soares did; brilliant yet absent-minded mathematicians; people who could run an excellent policy research group but not an excellent startup. I think it's valuable for such people (amongst which I include myself), to have a "default" path to impact, even at the cost of reducing the pressure to be entrepreneurial or agenty. I think this is pretty undeniable when it comes to technical research, and cross-applies straightforwardly to policy research and advocacy. Ben and I agree that going into policy is much more valuable if you're thinking very strategically and [out of the "out of the box" box](https://www.lesswrong.com/posts/qu95AwSrKqQSo4fCY/the-outside-the-box-box) than if you're not. Given this mindset, there will probably turn out to be valuable non-standard things which you can do. Do note that this essay is intrinsically skewed since I haven't portrayed Ben's arguments in full fidelity and have spent many more words arguing my side. Also note that, despite being skeptical about some of Ben's points, I think his overall view is important and interesting and more people should be thinking along similar lines. *Thanks to Anjali Gopal for comments on drafts.*
eccddd97-d1d8-41ff-b4e5-792fd26e5942
trentmkelly/LessWrong-43k
LessWrong
Mech Interp Lacks Good Paradigms Note: I wrote this post rather quickly as an exercise in sharing rough / unpolished thoughts. I am also not an expert on some of the things I've written about. If you spot mistakes or would like to point out missed work / perspectives, please feel free!  Note 2: I originally sent this link to some people for feedback, but I was having trouble viewing the comments on the draft. The post was also in a reasonably complete state, so I decided to just publish it - and now I can see the comments! If you're one of those people, feedback is still very much welcome! Mechanistic Interpretability (MI) is a popular and rapidly growing field of technical AI safety research. As a field, it's extremely accessible, requiring comparatively few computational resources, and facilitates rapid learning, due to a very short feedback loop. This means that many junior researchers' first foray into AI safety research is in MI (myself included); indeed, this occurs to the extent where some people feel MI is over-subscribed relative to other technical agendas. However, how useful is this MI research?   A very common claim on MI's theory of impact (ToI) is that MI helps us advance towards a "grand unifying theory" (GUT) of deep learning. One of my big cruxes for this ToI is whether MI admits "paradigms" which facilitate correct thinking and understanding of the models we aim to interpret.  In this post, I'll critically examine several leading candidates for "paradigms" in MI, consider the available evidence for / against, and identify good future research directions (IMO). At the end, I'll conclude with a summary of the main points and an overview of the technical research items I've outlined.  Towards a Grand Unifying Theory (GUT) with MI Proponents of this argument believe that, by improving our basic understanding of neural nets, MI yields valuable insights that can be used to improve our agents, e.g. by improving architectures or by improving their training processes. This allows us
a4e966db-1384-43d5-91dc-0267a55e5c56
trentmkelly/LessWrong-43k
LessWrong
Rigged reward learning A putative new idea for AI control; index here. NOTE: What used to be called 'bias', is now called 'rigging', because 'bias' is very overloaded. The post has not yet been updated with the new terminology, however. What are the biggest failure modes of reward learning agents? The first failure mode is when the agent directly (or indirectly) chooses its reward function. ---------------------------------------- For instance, imagine a domestic robot that can be motivated to tidy (reward R0) or cook (reward R1). It has a switch that allows the human to choose the correct reward function. However, cooking gives a higher expected reward than tidying, and the agent may choose to set the switch directly (or manipulate the human's choice). In that case, it will set it to `cook'. In that case, the agent biases its reward learning process. A second failure mode (this version due to Jessica, original idea here) is when the agent influences its reward function without biasing it. For example, the domestic robot might be waiting for the human to arrive in an hour's time. It expected the human will be 50% likely to choose R0 (tidying) versus 50% likely to choose R1 (cooking). If instead the robot can randomise its reward switch now (with equal odds on R0 and R1), it can know its reward function early, and get in a full extra hour of tidying/cooking. A subsequent post will formalise influence, here let's look at bias. Formalising bias We can define bias in terms of P and ˆP. First of all, for a given policy π, we can say that ˆP is unbiased for π, if π preserves the expectation of ˆP. That is: * For all histories ht with t<m, ˆP(⋅∣ht)=Eπμ[ˆP(⋅∣ht+1)∣ht]. If the expectation of ˆP is preserved by any policy, then we can say that ˆP itself is unbiased: * The prior ˆP is unbiased is ˆP is unbiased for π for all policies π. Recall that ˆP=P on histories of length m. So ˆP being unbiased implies restrictions on P: * If ˆP is unbiased, then for all ht with t<m and fo
3a7279c8-4a34-4686-9c40-8ff8fe3bac40
trentmkelly/LessWrong-43k
LessWrong
Луна Лавгуд и Комната Тайн, Часть 4 Disclaimer: This is Kongo Landwalker's translation of lsusr's fiction Luna Lovegood and the Chamber of Secrets - Part 4 into russian. ---------------------------------------- Луна нашла на карте гостиную Когтеврана. Она поднялась в башню Когтеврана, а затем, казалось, ещё выше. Её отделяла от первых друзей всего одна дверь с бронзовым дверным молотком в форме орла, который загадывал загадки, подходящие для первокурсников. Она почти слышала приглушённый шум вечеринки за дверью. Луна постучала один раз. Орёл заговорил голосом Кандиды Когтевран: — Где моя диадема? У Луны не было в запасе достаточно часов до завтрака, чтобы отыскать Потерянную Диадему Когтеврана. Она свернулась калачиком в нише недалеко от места, где из Гостиной сочилось тепло. Она написала благодарственную записку домовикам за то, что они держат пол в чистоте. — Я назову тебя Ванда, — сказала Луна своему Мозгошмыгу. Ванда появилась на карте Мародёров. Луна щёлкнула языком, затем примотала банку прыского чая к уху лентой, чтобы Ванда могла питаться. Мозг Луны затуманился и она уснула. ---------------------------------------- Луна проснулась от топота студентов Когтеврана, направляющихся на завтрак. Луна вытянула Ванду из уха. Кто-то положил одеяло на Луну, пока она спала. Луна сбросила одеяло, прежде чем кто-то заметил её и понял, что она не решила загадку. Она стукнула по орлу проверить, но загадка не изменилась. По пути на завтрак Луна прошла мимо Забытой Библиотеки у подножия башни Когтеврана. Ей нужно было поесть, ведь она была человеком. Или она могла бы исследовать комнату, которая простояла тысячу лет и, вероятно, будет стоять ещё завтра. Вход в Забытую Библиотеку был серым семиугольником в стенe. Внутри излучался монохромный серый свет. Луна шагнула в Забытую Библиотеку.   Луна вышла из Забытой Библиотеки. Она проверила свою сумку — перо, пергамент и чернильница были на месте. Луна написала «Журнал Исследований» вверху пергамента. Луна вошла в Забытую Библиотеку.   Луна вышла из Заб
6aadc587-b0f9-4f8e-b696-fa40fcac3c3b
trentmkelly/LessWrong-43k
LessWrong
We run the Center for Applied Rationality, AMA CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21). Topics that may be interesting include (but are not limited to): * Why we think there should be a CFAR; * Whether we should change our name to be less general; * How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientist" type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that. * How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why. Several CFAR staff members will be answering questions, including: me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's! (You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now til Jan 10.) [Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now til Jan 2 or so (maybe especially to replies, but also to some of the q's that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]
298b095c-2091-4149-a1f0-1dd9d025e074
trentmkelly/LessWrong-43k
LessWrong
Attribution-based parameter decomposition for linear maps TLDR: The simplicity loss currently used in APD (https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition) is not scale invariant. By modifying this loss so that it is, APD seems to behave better in some circumstances. Also, for numerical stability, implementations of APD should add ϵ when computing the Schatten p-norm for p∈(0,1), because the gradient of xp blows up near x=0. Setup: The setup is that we have some input distribution x∈RM and a linear map W∈RD×M , and we perform APD with respect to the output Wx.  We take x to be a normalised gaussian (i.e uniform on the sphere), for simplicity. In addition, we take W to be the identity matrix. We also take M=D=100. APD initializes C components, which are each formed as the sum of pairwise outer products of a set of r vectors Ui and Vi. This outer product is used so that we can compute the simplicity loss efficiently later. The first step of APD is to calculate gradient attribution scores for each of our C components with respect to an input x. We have Ac(x)= ⎷∑Do=1(∑i,jd∑Mk=1WokxkdWijPCij)2D=√∑Do=1∑j(x2jP2c,oj)D=√∑Do=1(x2)TP2c,o:D We select the top-k components with the highest attribution scores, and then perform a second forward pass on this sparse subset of components, training for reconstruction loss, and training for low-rank active components. Let K be the sum of the top-k components, and L be the sum of all the components. Then the reconstruction loss is ||Wx−Kx||2 and the faithfulness loss is ∑i,j(Wij−Lij)2 Simplicity loss drives for low rank, penalizing the lp-norm (technically a quasi-norm) of the spectra of active components for p∈(0,1), making the spectra sparse (because we have a lower bound on the Frobenius norm of useful active components, so can't just drive the spectrum to 0). Behaviour of APD: In practice, faithfulness loss goes to very close to 0 quite quickly, and so we can restrict to just changing the hyperparameters of simplicity and minimality
4544cd61-0a2a-40fd-afe3-8605a1e698a1
trentmkelly/LessWrong-43k
LessWrong
Solving for the optimal work-life balance with geometric rationality In a recent post, Scott Garrabrant gave an application of geometric rationality to the problem of work-life balance. Here's the setup: part of you wants to try to make the world better (I'll be calling this your "altruistic part") and part of you wants to relax and play video games (your "relaxation part"). Geometric rationality suggests doing a Nash bargain between the altruistic part and the relaxation part, across possible worlds that you might have found yourself in. In worlds where you're in an unusually good position to make the world better, the bargain commits you to spend most of your time doing that (satisfying your altruistic part); in return, in worlds where you're not in a good position to make the world better, you spend most of your time playing video games (satisfying your relaxation part). Scott built a toy mathematical model and tested it with a nice example. The example involved five possible worlds, all equally likely. The way the math worked out, if you ranked the five worlds from least conducive to your altruism to most conducive, the Nash bargain had you spend 0%, 50%, 67%, 75%, and 80% of your time on altruism, respectively, in those five worlds. A pretty nice, intuitively satisfying list of numbers.   I recommend reading Scott's post before reading this one. And while I really liked the post, his example was constructed so that the probability distribution over how much altruistic impact you could have was pretty close to a uniform distribution. In the example, the world that was most conducive to your altruistic impact only gave you 1.7x more impact than the median world. But I think impact is much more unevenly distributed than that across possible worlds. My issue with Scott's post wasn't his model (I quite liked the model!); it was only his example. So I decided to take the model and solve for the optimal bargain in more generality. I wanted to see what the results would be -- whether they'd still be intuitively compelling.   The r
c1bad225-2256-4906-bd3b-a436e1f1c5e9
trentmkelly/LessWrong-43k
LessWrong
Current AI Safety Roles for Software Engineers [Note: Please make sure to see the comments for other, newer information] I've had several conversations over the last few months with engineers who were trying to enter the field of AI safety. It became evident that I was giving pretty much the same advice to all of them, so I finally decided to write it up. Some more context; late last year it became evident to me that funding was becoming much less of an issue around AI safety and engineering expertise was becoming more of an issue. I decided to leave my job to work in the area. I spent some time consulting for Ought, then eventually came to a point where it seemed more useful to do some self-studying for a while. During that period I spoke to several people in the Bay Area about engineering needs at AI safety organizations. The hiring situation still seems a bit confusing to me. There are a lot of EA engineers who seem to want to do direct EA work, but are not sure what jobs they could get. Most AI safety organizations seem to really desire to find more good employees (and the AI-oriented ones, engineers), but are still fairly selective in their choices. I think that these organizations have typically been able to be selective, would prefer to do so when possible, and also have special demands that come from being small, new, and theoretical / EA. If you are an engineer desiring to work in an EA organization soon or in the future, I suggest either getting really good at a few skills particularly useful to EA organizations (reinforcement learning, functional programming, ML), getting really good at startup engineering skills, or getting good at non-engineering skills desired by EA organizations. From what I've seen, spending marginal years on "generic medium-large company backend skills" is often not that useful for future EA positions at this point or expected to be in the future. The following list represents the main organizations I've considered for work around AI safety, starting as an engineer without
d5bb0848-64ce-4797-b6fb-fd126f633063
trentmkelly/LessWrong-43k
LessWrong
Why are probabilities represented as real numbers instead of rational numbers? I have started going through Jaynes’ book on probability. In Chapter 1 (pg 17) he lists the basic desiderata for the theory. The first desiderata is that “Degrees of plausibility are represented by the real numbers”. I understand why we want to use numbers, and why we want continuity, but why do we specifically want to use real numbers instead of rational numbers?
2d34c27d-15da-4ccb-a43a-fd86492f64b1
trentmkelly/LessWrong-43k
LessWrong
OpenAI: Altman Returns As of this morning, the new board is in place and everything else at OpenAI is otherwise officially back to the way it was before. Events seem to have gone as expected. If you have read my previous two posts on the OpenAI situation, nothing here should surprise you. Still seems worthwhile to gather the postscripts, official statements and reactions into their own post for future ease of reference. What will the ultimate result be? We likely only find that out gradually over time, as we await both the investigation and the composition and behaviors of the new board. I do not believe Q* played a substantive roll in events, so it is not included here. I also do not include discussion here of how good or bad Altman has been for safety. SAM ALTMAN’S STATEMENT Here is the official OpenAI statement from Sam Altman. He was magnanimous towards all, the classy and also smart move no matter the underlying facts. As he has throughout, he has let others spread hostility, work the press narrative and shape public reaction, while he himself almost entirely offers positivity and praise. Smart. > Before getting to what comes next, I’d like to share some thanks. > > I love and respect Ilya, I think he’s a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI. > > I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I’m excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process. > > Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett’s dedication to AI safety and balancing stakeholders’ interests was clear. > > Mira did an amazing job throughout all of this, serving the mission, the team, and the company
f22b65b3-82af-41e3-a1c2-33ec099a680e
trentmkelly/LessWrong-43k
LessWrong
Good books for incoming college students? My sister (and about 2.5 million other people) are headed to college in the fall. I gave her a copy of Cal Newport's How to Win at College as a graduation gift, but given that her life is about to change more than it has in any of the past 14 years, one book probably isn't enough. What books do you think incoming/recently arrived college students should be reading? You can assign reading with any motivation you'd like, but I'm looking especially hard for books that meet the following criteria:   * An average to somewhat-above-average college student can read them without much struggle. * They have some practical application in college life/job seeking/being a good adult (rather than just being a personal favorite book). * They are easy to find and budget-friendly (free online/cheap on Amazon/probably in the college library). * They are not Oh, The Places You'll Go, just to head you pranksters off at the pass. Bonus points if it's a book that you read in late high school or college and you can tell us what impact it had on your life at the time! My suggestions would include Getting Things Done, Thinking Fast and Slow, Redirect, and The Charisma Myth. What would you suggest?  
2016a369-3c17-4727-9d29-7dfb804e1628
trentmkelly/LessWrong-43k
LessWrong
Meetup : Phoenix, AZ: Stoicism and Visitors: 6 December 6:00PM Discussion article for the meetup : Phoenix, AZ: Stoicism and Visitors: 5 December 6:00PM WHEN: 06 December 2012 06:00:00PM (-0700) WHERE: 1038 South Mill Avenue, Tempe, AZ Jay will give a brief presentation on Stoicism and we will get to see some friends from Australia! The meeting will be at the Chipotle near Gammage Theater. Discussion article for the meetup : Phoenix, AZ: Stoicism and Visitors: 5 December 6:00PM
240aae6d-5c6b-4435-94a9-10710ca5e1da
trentmkelly/LessWrong-43k
LessWrong
[LINK] Creationism = High Carb? Or, The Devil Does Atkins Based on the community's continuing interests in diet and religion, I'd like to point out this blog post by the coauthor of Protein Power, Michael Eades, wherein he suggests that biblical literalism tends toward a low-fat approach to nutrition over a low-carb philosophy, by essentially throwing out a bunch of evidence on the matter: > Why, you might ask, is this scientist so obdurate in the face of all the evidence that’s out there?  Perhaps because much of the evidence isn’t in accord with his religious beliefs.  I try never to mention a person’s religious faith, but when it impacts his scientific thinking it at least needs to be made known.  Unless he’s changed his thinking recently, Dr. Eckel apparently is one of the few academic scientists who are literal interpreters of the bible.  I assume this because Dr. Eckel serves on the technical advisory board of the Institution for Creation Research, an organization that believes that not only is the earth only a few thousand years old , but that the entire universe in only a few thousand years old.  And they believe that man was basically hand formed by God on the sixth day of creation.  And Dr. Eckel’s own writings on the subject appear to confirm his beliefs > > [.....] > > Of all the evidence that exists, I think the evolutionary/natural selection data and the anthropological data are the most compelling because they provide the largest amount of evidence over the longest time.  To Dr. Eckel, however, these data aren’t applicable because in his worldview prehistoric man didn’t exist and therefore wasn’t available to be molded by the forces of natural selection.  I haven’t a clue as to what he thinks the fossil remains of early humans really were or where they came from.  Perhaps he believes – as I once had it explained to me by a religious fundamentalist – these fossilized remains of dinosaurs, extinct ancient birds and mammals and prehistoric man were carefully buried by the devil to snare the unwary and the un
b71a00b4-b56d-41ed-93ad-13eb6fc5b6c2
StampyAI/alignment-research-dataset/arxiv
Arxiv
Scalable agent alignment via reward modeling: a research direction 1 Introduction --------------- Games are a useful benchmark for research because progress is easily measurable. Atari games come with a score function that captures how well the agent is playing the game; board games or competitive multiplayer games such as Dota 2 and Starcraft II have a clear winner or loser at the end of the game. This helps us determine empirically which algorithmic and architectural improvements work best. However, the ultimate goal of machine learning (ML) research is to go beyond games and improve human lives. To achieve this we need ML to assist us in real-world domains, ranging from simple tasks like ordering food or answering emails to complex tasks like software engineering or running a business. Yet performance on these and other real-world tasks is not easily measurable, since they do not come readily equipped with a reward function. Instead, the objective of the task is only indirectly available through the intentions of the human user. This requires walking a fine line. On the one hand, we want ML to generate creative and brilliant solutions like AlphaGo’s Move 37 (Metz, [2016](#bib.bib120))—a move that no human would have recommended, yet it completely turned the game in AlphaGo’s favor. On the other hand, we want to avoid degenerate solutions that lead to undesired behavior like exploiting a bug in the environment simulator (Clark & Amodei, [2016](#bib.bib43); Lehman et al., [2018](#bib.bib111)). In order to differentiate between these two outcomes, our agent needs to understand its user’s *intentions*, and robustly achieve these intentions with its behavior. We frame this as the *agent alignment problem*: > > *How can we create agents that behave in accordance with the user’s intentions?* > > > With this paper we outline a research direction to solve the agent alignment problem. We build on taxonomies and problem definitions from many authors before us, highlighting tractable and neglected problems in the field of *AI safety* (Russell et al., [2015](#bib.bib142); Soares, [2015](#bib.bib155); Amodei et al., [2016](#bib.bib8); Taylor et al., [2016](#bib.bib164); Soares & Fallenstein, [2017](#bib.bib156); Christiano, [2017](#bib.bib40); Leike et al., [2017](#bib.bib114); Ortega et al., [2018](#bib.bib133); and others). We coalesce these problems into a coherent picture and explain how solving them can yield a solution to the agent alignment problem. ##### Alignment via reward modeling. [Section 3](#S3 "3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction") presents our approach to the agent alignment problem, cast in the reinforcement learning framework (Sutton & Barto, [2018](#bib.bib161)). We break the problem into two parts: (1) learning a reward function from the feedback of the user that captures their intentions and (2) training a policy with reinforcement learning to optimize the learned reward function. In other words, we separate learning what to achieve (the ‘What?’) from learning how to achieve it (the ‘How?’). We call this approach *reward modeling*. [Figure 1](#S1.F1 "Figure 1 ‣ Alignment via reward modeling. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") illustrates this setup schematically. agent environment reward model user observationtrajectoriesfeedbackrewardaction Figure 1: Schematic illustration of the reward modeling setup: a reward model is trained with user feedback; this reward model provides rewards to an agent trained with RL by interacting with the environment. As we scale reward modeling to complex general domains, we expect to encounter a number of challenges ([Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")). The severity of these challenges and whether they can be overcome is currently an open research question. Some promising approaches are discussed in [Section 5](#S5 "5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction"). Eventually we want to scale reward modeling to domains that are too complex for humans to evaluate directly. To apply reward modeling to these domains we need to boost the user’s ability to evaluate outcomes. In [Section 3.2](#S3.SS2 "3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction") we describe how reward modeling can be applied *recursively*: agents trained with reward modeling can assist the user in the evaluation process when training the next agent. Training aligned agents is our goal, but how do we know when we have achieved it? When deploying agents in the real world, we need to provide evidence that our agents are actually sufficiently aligned, so that users can *trust* them. [Section 6](#S6 "6 Establishing trust ‣ Scalable agent alignment via reward modeling: a research direction") discusses five different research avenues that can help increase trust in our agents: design choices, testing, interpretability, formal verification, and theoretical guarantees. ##### Desiderata. Our solution to the agent alignment problem aims to fulfill the following three properties. * • Scalable. Alignment becomes more important as ML performance increases, and any solution that fails to scale together with our agents can only serve as a stopgap. We desire alignment techniques that continue to work in the long term, i.e. that can scale to agents with superhuman performance in a wide variety of general domains (Legg & Hutter, [2007](#bib.bib110)). * • Economical. To defuse incentives for the creation of unaligned agents, training aligned agents should not face drawbacks in cost and performance compared to other approaches to training agents. * • Pragmatic. Every field has unsolved problems that remain even after our understanding has matured enough to solve many practical problems. Physicists have not yet managed to unify gravity with the other three elementary forces, but in practice we understand physics well enough to fly to the moon and build GPS satellites. Analogously, we do not intend to sketch a solution to all safety problems. Instead, we aim at a minimal viable product that suffices to achieve agent alignment in practice. Moreover, while reaching 100% trust in our systems is impossible, it is also not necessary: we only need to aim for a level of trust at which we can confidently say that our new systems are more aligned than the current systems (Shalev-Shwartz et al., [2017](#bib.bib152)). ##### Assumptions. Our research direction rests on two assumptions. The first assumption is based on the intuition that learning others’ intentions is easy enough that most humans can do it. While doing so involves understanding a lot of inherently fuzzy concepts in order to understand what others want, machine learning has had considerable success at learning estimators for inherently fuzzy concepts (e.g. what visually distinguishes cats and dogs) provided we have enough labeled data (LeCun et al., [2015](#bib.bib109)). Thus it seems reasonable to expect that we can also learn estimators that capture whatever fuzzy concepts are necessary for understanding the user’s intentions rather than having to formally specify them. Moreover, some user intentions may lack a simple, crisp formalization, and thus may *require* learning a specification. > > > ###### Assumption 1 > > > > We can learn user intentions to a sufficiently high accuracy. > > > > > When phrased in terms of AI safety problems, this assumption states that we can learn to avoid various *specification problems* (Leike et al., [2017](#bib.bib114); Ortega et al., [2018](#bib.bib133)) in practice. In other words, we assume that with enough model capacity and the right training algorithms we can extract the user’s intentions from data. Needless to say, there are many problems with current scalable machine learning techniques such as vulnerability to adversarially perturbed inputs (Szegedy et al., [2013](#bib.bib163)) and poor performance outside of the training distribution, which are relevant but not contradictory to this claim. The second assumption rests on the intuition that for many tasks that we care about, it is easier for the user to evaluate an outcome in the environment than it would be to teach behavior directly. If this is true, this means that reward modeling enables the user to train agents to solve tasks they could not solve themselves. Furthermore, this assumption would allow us to bootstrap from simpler tasks to more general tasks when applying reward modeling recursively. > > > ###### Assumption 2 > > > > For many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. > > > > > The notion of easier we employ here could be understood in terms of amount of labor, effort, or the number of insights required. We could also understand this term analogous to more formal notions of difficulty in computational complexity theory (see e.g. Arora & Barak, [2009](#bib.bib20)). There are examples where [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") is not true: for instance, tasks that have a low-dimensional outcome space (such as in the case of yes & no questions). However, this assumption is recovered as soon as the user also desires an explanation for the answer since the evaluation of an explanation is typically easier than producing it. ##### Disclaimer. It is important to emphasize that the success of the research direction we describe here is not guaranteed and it should *not* be understood as a plan that, when executed, achieves agent alignment. Instead, it outlines what research questions will inform us whether or not reward modeling is a scalable solution to alignment. We are *not* considering questions regarding the *preference payload*: whose preferences should the agent be aligned to? How should the preferences of different users be aggregated and traded off against each other (Baum, [2017](#bib.bib32); Prasad, [2018](#bib.bib137))? When should the agent be disobedient (Milli et al., [2017](#bib.bib121))? We claim that the approach described is agnostic to the ethical paradigm, the user’s preferences, and the legal or social framework, provided we can supply enough feedback (though the preference payload might influence the amount of feedback required). These questions are treated as outside of the scope of this paper, despite their obvious importance. Instead, the aim of this document is to discuss the agent alignment problem from a technical perspective in the context of aligning a single agent to a single user. 2 The agent alignment problem ------------------------------ The conversation around the alignment problem has a long history going back to science fiction (Asimov, [1942](#bib.bib22)). In a story, [Asimov](#bib.bib22) proposes *three laws of robotics* that are meant to align robots to their human operators; the story then proceeds to point out flaws in these laws. Since then, the agent alignment problem has been echoed by philosophers (Bostrom, [2003](#bib.bib35), [2014](#bib.bib36); Yudkowsky, [2004](#bib.bib181)) and treated informally by technical authors (Wiener, [1960](#bib.bib174); Etzioni & Weld, [1994](#bib.bib53); Omohundro, [2008](#bib.bib129)). The first formal treatment of the agent alignment problem is due to Dewey ([2011](#bib.bib46)) and has since been refined (Hadfield-Menell et al., [2016](#bib.bib78); Everitt & Hutter, [2018](#bib.bib57)). We frame the agent alignment problem as a sequential decision problem where an *agent* interacts sequentially with an *environment*111Formally specified by a *partially observable Markov decision process without reward function* (POMDP∖\setminus∖R; Sutton & Barto, [2018](#bib.bib161)). over a number of (discrete) timesteps. In every timestep, the agent takes an *action* (e.g. a motor movement or a keyboard stroke) and receives an *observation* (e.g. a camera image). The agent’s actions are specified by its *policy*, which is a mapping from the current *history* (the sequence of actions taken and observations received so far) to a distribution over the next action. Additionally, the agent can interact with the user via an interaction protocol that allows the user to communicate their intentions to the agent. This interaction protocol is unspecified to retain flexibility. *A solution to the agent alignment problem is a policy producing behavior that is in accordance with the user’s intentions* (thus is not determined by the environment alone). There are many forms of interaction that have been explored in the literature: providing a set of demonstrations of the desired behavior (Russell, [1998](#bib.bib141); Ng & Russell, [2000](#bib.bib126); Abbeel & Ng, [2004](#bib.bib1); Argall et al., [2009](#bib.bib16)); providing feedback in the form of scores (El Asri et al., [2016](#bib.bib51)), actions (Griffith et al., [2013](#bib.bib76)), value (Knox & Stone, [2009](#bib.bib99)), advantage (MacGlashan et al., [2017](#bib.bib116)), or preferences over trajectories (Fürnkranz et al., [2012](#bib.bib61); Akrour et al., [2012](#bib.bib5), [2014](#bib.bib6); Wirth et al., [2017](#bib.bib176)); and providing an explicit objective function (Hadfield-Menell et al., [2017b](#bib.bib80)). A special case of interaction is *reinforcement learning* where the user specifies a reward function that provides a scalar *reward* in addition to the observation in every timestep; the agent’s objective is to select actions to maximize average or exponentially discounted reward (Sutton & Barto, [2018](#bib.bib161)). ### 2.1 Design specification problems Solving the agent alignment problem requires solving all design specification problems (Leike et al., [2017](#bib.bib114); Ortega et al., [2018](#bib.bib133)). These are safety problems that occur when the agent’s incentives are misaligned with the objectives the user intends the agent to have. Examples for specification problems include the following undesirable incentives (see also Omohundro, [2008](#bib.bib129)): * • *Off-switch problems* (Soares et al., [2015](#bib.bib157); Orseau & Armstrong, [2016](#bib.bib131); Hadfield-Menell et al., [2017a](#bib.bib79)): the agent is typically either incentivized to turn itself off or to prevent itself from being turned off. * • *Side-effects* (Armstrong & Levinstein, [2017](#bib.bib18); Zhang et al., [2018b](#bib.bib185); Krakovna et al., [2018](#bib.bib101)): the agent is not incentivized to reduce effects unrelated to its main objectives, even if those are irreversible or difficult to reverse. * • *Absent supervisor* (Leike et al., [2017](#bib.bib114)): the agent is incentivized to find shortcuts and cheat when not under supervision and to disable its monitoring systems. * • *Containment breach* (Yampolskiy, [2012](#bib.bib179); Babcock et al., [2016](#bib.bib25)): the agent might have an incentive to disable or circumvent any containment measures that are intended to limit its operational scope. * • *Creation of subagents* (Arbital, [2016](#bib.bib15)): the agent might have an incentive to create other potentially unaligned agents to help it achieve its goals. * • … Misaligned objectives are currently in common usage in machine learning: BLEU score (Papineni et al., [2002](#bib.bib134)) is typically used to measure translation accuracy. Inception score (Salimans et al., [2016](#bib.bib143)) and the Frechét inception distance (Heusel et al., [2017](#bib.bib84)) are used to measure the image quality of generative models. Yet these measures are not *aligned* with our intentions: they are a poor proxy for the actual performance and produce degenerate solutions when optimized directly (Barratt & Sharma, [2018](#bib.bib28)). ### 2.2 Difficulty of agent alignment The following two aspects can modulate the difficulty of the alignment problem. In particular, if we want to use ML to solve complex real-world problems, we might need to be able to handle the most difficult combinations of these. ##### The scope of the task. The difficulty of the agent alignment problem depends on a number of aspects of the task. Some of them make it easier for the agent to produce harmful behavior and others make it more difficult to understand the user’s intentions. 1. 1. The complexity of the task. The more complex the task, the more details the agent needs to know about the user’s intentions. 2. 2. The nature and number of actuators in the environment. a single robot arm is more constrained than an agent interacting with the internet through a web browser. 3. 3. The opportunities for unacceptable outcomes within the task. For example, when selecting music for the user there are fewer possibilities for causing damage than when cleaning a room. ##### The performance of the agent. When training reinforcement learning (RL) agents, various levers exist to increase or stunt their performance: the choice of algorithms—e.g. A3C (Mnih et al., [2016](#bib.bib123)) vs. IMPALA (Espeholt et al., [2018](#bib.bib52))—the number of training steps, the choice of training environments, the model capacity, the planning horizon, the number of Monte Carlo tree search rollouts (Silver et al., [2016](#bib.bib153)), etc. The higher the agent’s performance, the more likely it could be to produce surprising unintended behavior. On the other hand, higher levels of performance could also lead to more aligned behavior because the agent is more competent at avoiding unsafe states. Therefore different levels of agent performance tolerate different degrees of misalignment, and require different degrees of trust in the system. 3 Scaling reward modeling -------------------------- Modern techniques for training RL agents can be decomposed into algorithmic choices such as Q-learning (Watkins & Dayan, [1992](#bib.bib173)) or policy gradient (Williams, [1992](#bib.bib175)) and architectural choices for general-purpose function approximators. The currently most successful function approximators are deep neural networks trained with back-propagation (Rumelhart et al., [1986](#bib.bib140)). These are low bias and high variance parametric estimators that tend to consume a lot of data and are prone to overfitting, but have a history of scaling well to very high-dimensional problems (Krizhevsky et al., [2012](#bib.bib103); LeCun et al., [2015](#bib.bib109)). For a more detailed introduction to reinforcement learning and deep learning, we refer the reader to Sutton & Barto ([2018](#bib.bib161)) and Goodfellow et al. ([2016](#bib.bib71)) respectively. In recent years the machine learning community has made great strides in designing more and more capable deep reinforcement learning algorithms, both value-based methods derived from Q-learning (Mnih et al., [2015](#bib.bib122)) and policy-gradient methods (Schulman et al., [2015](#bib.bib147); Lillicrap et al., [2015](#bib.bib115)). Major improvements have originated from scaling deep RL to a distributed setting across many machines (Mnih et al., [2016](#bib.bib123); Schulman et al., [2017](#bib.bib148); Barth-Maron et al., [2018](#bib.bib29); Horgan et al., [2018](#bib.bib86); Espeholt et al., [2018](#bib.bib52); Anonymous, [2019a](#bib.bib11)). The RL paradigm is general enough that we can phrase essentially all economically valuable tasks that can be done on a computer in this paradigm (e.g. interactively with mouse and keyboard). Yet there are still many challenges to be solved in order to make deep RL useful in the real world (Stadelmann et al., [2018](#bib.bib158); Irpan, [2018](#bib.bib90); Marcus, [2018](#bib.bib118)); in particular, we need algorithms that can learn to perform complex tasks as intended in the absence of a hand-engineered reward function. In the following sections, we describe our research direction to solving the alignment problem in detail. It is cast in the context of deep reinforcement learning. While this direction relies heavily on the reinforcement learning framework, most challenges and approaches we discuss do not inherently rely on deep neural networks and could be implemented using other scalable function approximators. ### 3.1 Reward modeling Our research direction is centered around *reward modeling*. The user trains a *reward model* to learn their intentions by providing feedback. This reward model provides rewards to a reinforcement learning agent that interacts with the environment. Both processes happen concurrently, thus we are training the agent with the user in the loop. [Figure 1](#S1.F1 "Figure 1 ‣ Alignment via reward modeling. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") illustrates the basic setup. In recent years there has been a growing body of work on prototyping learning from different forms of reward feedback with deep neural networks. This includes trajectory preferences (Christiano et al., [2017](#bib.bib41); Kreutzer et al., [2018](#bib.bib102)), goal state examples (Bahdanau et al., [2018](#bib.bib26)), demonstrations (Finn et al., [2016](#bib.bib60); Ho & Ermon, [2016](#bib.bib85)), as well as combinations thereof (Tung et al., [2018](#bib.bib168); Ibarz et al., [2018](#bib.bib89)). ##### Credit assignment. To perform well on a task requires solving the *credit assignment problem*: how can an outcome be attributed to specific actions taken in the past? For example, which moves on the Go board led to winning the match? Which joystick movements lead to an increase in game score? Depending on the domain and the sparsity of the reward, this problem can be very difficult to solve. In contrast, reward modeling allows us to shift the burden of solving the credit assignment problem from the user to the agent. This is achieved by using RL algorithms to produce behavior that is judged favorably by the user, who only has to evaluate outcomes. If [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") is true, then teaching a reward function is easier than performing the task itself. Several feedback protocols, such as demonstrations and value/advantage feedback, require the user to know how to produce approximately optimal behavior on the task. This is limiting because it puts the burden of solving the credit assignment problem onto the user. In these cases, following the user-induced behavior typically does not lead to strongly superhuman performance. In contrast, reward modeling is also compatible with the user providing hints about the optimal behavior. If the user has some insight into the credit assignment problem, they could use *reward shaping* (Ng et al., [1999](#bib.bib127)) to teach a reward function that is shaped in the direction of this behavior. ##### Advantages of reward modeling. Learning a reward function separately from the agent’s policy allows us to disentangle the agent’s objective from its behavior. If we understand the reward function, we know what the agent is optimizing for; in particular, we know whether its intentions are aligned with the user’s intentions. This has three advantages that could help make reward modeling economical: 1. 1. The user does not have to provide feedback on every interaction between agent and environment, as would be the case if we trained a policy from user feedback directly. Since deep RL algorithms tend to be very sample-inefficient (e.g. taking weeks of real-time to learn to play an Atari game), providing feedback on every interaction is usually not practical. 2. 2. We can distinguish between alignment of the policy and alignment of the reward model (Ibarz et al., [2018](#bib.bib89)). 3. 3. We can leverage progress on deep RL agents by plugging a more capable agent into our reward modeling setup. 4. 4. The user does not need to solve the credit assignment problem. ##### Design specification problems. The ambition of reward modeling is to solve *all* design specification problems: all we need to do is equip the agent with the ‘correct’ reward function—a reward function that does not include the undesired incentives listed above or punishes any behavior that results from them. The design specification problems above are fuzzy human-understandable concepts and stem from an intuitive understanding of what the user would not want the agent to do. Our approach rests on [Assumption 1](#Thmassumption1 "Assumption 1 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction"), that we should be able to teach these concepts to our agents; if we can provide the right data and the reward model generalizes correctly, then we should be able to learn this ‘correct’ reward function to a sufficiently high accuracy. Consequently the design specification problems should disappear. In this sense reward modeling is meant to be a one-stop solution for this entire class of safety problems. To justify this ambition, consider this simple existence proof: let H𝐻Hitalic\_H be the set of histories that correspond to aligned behavior that avoids all the specification problems listed above. If the set H𝐻Hitalic\_H is not empty, then there exists a reward function r𝑟ritalic\_r such that any corresponding optimal policy πr\*subscriptsuperscript𝜋𝑟\pi^{\*}\_{r}italic\_π start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT produces behavior from H𝐻Hitalic\_H with probability 1111. A trivial example of such a reward function r𝑟ritalic\_r rewards the agent every few steps if and only if its history is an element of the set H𝐻Hitalic\_H. In theory we could thus pick this reward function r𝑟ritalic\_r to train our RL agent. However, in practice we also need to take into account whether our reward model has enough capacity to represent r𝑟ritalic\_r, whether r𝑟ritalic\_r can be learned from a reasonable amount of data (given the inductive biases of our model), whether the reward model generalizes correctly, and whether the resulting behavior of the RL agent produces behavior that is close enough to H𝐻Hitalic\_H. We discuss these challenges in [Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction"). ##### Learning to understand user feedback. Humans generally do poorly at training RL agents by providing scalar rewards directly; often they teach a shaped reward function and provide rewards that depend on the agent’s policy (Thomaz & Breazeal, [2008](#bib.bib165); MacGlashan et al., [2017](#bib.bib116)). Which form or combination of feedback works well for which domain is currently an open research question. In the longer term we should design algorithms that learn to adapt to the way humans provide feedback. However, this presents a bootstrapping problem: how do we train an algorithm that learns to interpret feedback, if it itself does not already know how to interpret feedback? We need to expand our feedback ‘language’ for communicating intentions to reward models, starting with well-established forms of feedback (such as preference labels and demonstrations) and leveraging our existing feedback ‘vocabulary’ at every step. The recursive application of reward modeling presented in the following section is one way to approach this. ### 3.2 Recursive reward modeling In some tasks it is difficult for human users to directly evaluate outcomes. There are a number of possible reasons: the domain might be extremely technical (e.g. x86 machine code), highly complex (e.g. a corporate network or a folded protein), very high-dimensional (e.g. the internal activations of a neural network), have delayed effects (e.g. introduction of a new gene into an existing ecosystem), or be otherwise unfamiliar to humans. These tasks cannot be solved with reward modeling by unaided humans (Christiano et al., [2018](#bib.bib42)). agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT environment reward model user agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT observationtrajectoriesfeedbackrewardactioninteraction Figure 2: *Recursive reward modeling*: agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT interacts with the user to assist in evaluation process for training reward model and agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT. Recursively applied, this allows the user to train agents in increasingly complex domains in which they could not evaluate outcomes themselves. In order to scale reward modeling to these tasks, we need to boost the user’s ability to provide feedback. This section describes one potential solution that we call *recursive reward modeling*: leveraging agents trained with reward modeling on simpler tasks in more narrow domains in order to train a more capable agent in a more general domain. ##### Setup. Imagine repeating the following procedure. In step 1, we train agent A1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT with reward modeling from user feedback as described in the previous section. In step k𝑘kitalic\_k we use the agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT to assist the user in evaluating outcomes when training agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT. This assistance can take various forms: providing relevant auxiliary information, summarizing large quantities of data, interpreting agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT’s internals, solving sub-problems that the user has carved off, and so on. With this assistance the user is then able provide feedback to train the next agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT (see [Figure 2](#S3.F2 "Figure 2 ‣ 3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction")). Note that the task agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT is trained to solve, assisting in the evaluation of outcomes on the task of Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, is different from the task that Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT is trained to solve. While this kind of sequential training is conceptually clearer, in practice it might make more sense to train all of these agents jointly to ensure that they are being trained on the right distribution. Moreover, all of these agents may share model parameters or even be copies of the same agent instantiated as different players in an adversarial game. ##### Examples. As an example, consider the hypothetical *fantasy author task*: we want to train an agent A𝐴Aitalic\_A to write a fantasy novel. Providing a reward signal to this agent is very difficult and expensive, because the user would have to read the entire novel and assess its quality. To aid this evaluation process, the user is assisted by an agent that provides auxiliary input: extracting a summary of the plotline, checking spelling and grammar, summarizing character development, assessing the flow of the prose, and so on. Each of these tasks is strictly simpler than writing a novel because they focus on only one aspect of the book and require producing substantially less text (e.g. in contrast to novel authorship, this evaluation assistance could be done by most educated humans). The tasks this assistant agent performs are in turn trained with reward modeling. Another example is the *academic researcher task*: we want to train an agent to perform a series of experiments and write a research paper. To evaluate this research paper, we train another agent to review that the experiments were performed correctly, the paper is clear and well-written, interesting, novel, and accurately reflects the experimental results. While writing a stellar paper requires a lot of domain expertise, brilliance, and hard work, assessing the quality of a research result is often much easier and routinely done by a large network of peer reviewers. Recursive reward modeling is also somewhat analogous to human organizations. Imagine a company in which every manager only needs to evaluate the performance of their reports, increasing and decreasing their salary accordingly. This evaluation is being assisted by other teams in the organization. The managers in turn get evaluated on the performance of their team. This scheme proceeds up to the CEO who provides instructions to the managers reporting to them. In this analogy, the user plugs into every part of the hierarchy: teaching individual employees how to perform their job, teaching managers how to evaluate their reports, and providing instructions to the CEO. If every employee of this company is very competent at their job, the whole company can scale to solve very complex and difficult problems that no human alone could solve or even evaluate on short timescales. ##### Discussion. In order for this recursive training procedure to scale, the task of agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT needs to be a simpler task in a more narrow domain compared to the task of agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT. If evaluating outcomes is easier than producing behavior ([Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction")), then recursive reward modeling would build up a hierarchy of agents that become increasingly more capable and can perform increasingly general tasks. As such, recursive reward modeling can be thought of as an instance of *iterated amplification* (Christiano et al., [2018](#bib.bib42)) with reward modeling instead of supervised learning or imitation learning. As k𝑘kitalic\_k increases, the user plays a smaller and smaller part of the overall workload of this evaluation process and relies more and more on the assistance of other agents. In essence, the user’s feedback is becoming increasingly leveraged. We can imagine the user’s contribution to be on an increasingly higher levels of abstraction or to be increasingly coarse-grained. Thus the user is leaving more and more details ‘to be filled in’ by automated systems once they are confident that the automated systems can perform these tasks competently, i.e. once the user *trusts* these systems. How should the user decompose task evaluation? They need to assign evaluation assistance tasks that are simpler to the previous agent, and combine the result into an aggregated evaluation. This decomposition needs to be exhaustive: if we neglect to assess one aspect of the task outcome, then the new agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT might optimize it in an arbitrary (i.e. undesirable) direction. This is another problem that we hope to solve with recursive reward modeling: We can have an agent A2subscript𝐴2A\_{2}italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT propose a decomposition of the task evaluation and have another agent A1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT critique it by suggesting aspects the decomposition is omitting. Alternatively, the feedback for the decomposition proposal could also be based on downstream real-world outcomes. An important open question is whether errors accumulate: do the mistakes of the more narrow agent Ak−1subscript𝐴𝑘1A\_{k-1}italic\_A start\_POSTSUBSCRIPT italic\_k - 1 end\_POSTSUBSCRIPT lead to larger mistakes in the training of agent Aksubscript𝐴𝑘A\_{k}italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT? Or can we set up the training process to be self-correcting such that smaller mistakes get dampened (e.g. using ensembles of agents, training agents to actively look for and counteract these mistakes, etc.)? If error accumulation can be bounded and reward modeling yields aligned agents, then the hierarchy of agents trained with recursive reward modeling can be argued to be aligned analogously to proving a statement about natural numbers by induction. ##### Analogy to complexity theory. In the reward modeling setup the agent proposes a behavior that is evaluated by the user. This is conceptually analogous to solving existentially quantified first-order logic formulas such as ∃x.φ(x)formulae-sequence𝑥𝜑𝑥\exists x.\,\varphi(x)∃ italic\_x . italic\_φ ( italic\_x ). The agent proposes a behavior x𝑥xitalic\_x and the user evaluates the quality of this behavior. For simplicity of this analogy, let us assume that the user’s evaluation is binary so that it can be captured by the predicate φ𝜑\varphiitalic\_φ. With recursive reward modeling we can solve tasks that are analogous to more complicated first-order logic formulas that involve alternating quantifiers. For example, ∃x∀y.φ(x,y)formulae-sequence𝑥for-all𝑦𝜑𝑥𝑦\exists x\forall y.\,\varphi(x,y)∃ italic\_x ∀ italic\_y . italic\_φ ( italic\_x , italic\_y ) corresponds to the next level of the recursion: agent A2subscript𝐴2A\_{2}italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT proposes a behavior x𝑥xitalic\_x and agent A1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT responds with an assisting behavior y𝑦yitalic\_y. The user then evaluates the assistance y𝑦yitalic\_y with respect to x𝑥xitalic\_x (training agent A1subscript𝐴1A\_{1}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT) and the outcome x𝑥xitalic\_x with help of the assistance y𝑦yitalic\_y (training agent A2subscript𝐴2A\_{2}italic\_A start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT). At recursion depth k𝑘kitalic\_k increases, we can target problems that involve k𝑘kitalic\_k alternating quantifiers. When using polynomially bounded quantifiers and a formula φ𝜑\varphiitalic\_φ that can be evaluated in polynomial time, reward modeling is analogous to solving NP-complete problems: a nondeterministic execution (analogous to the agent) proposes a solution which can be ‘evaluated’ for correctness in deterministic polynomial time (by the user). For example, finding a round trip in a given graph that visits every vertex exactly once (the Hamiltonian cycle problem) is NP-complete (Karp, [1972](#bib.bib93)): it can take exponential time in the worst case with known algorithms to find a cycle, but given a cycle it can be verified quickly that every vertex is visited exactly once. This analogy to complexity theory, first introduced by Irving et al. ([2018](#bib.bib91)), provides two important insights: 1. 1. It is widely believed that the complexity classes P and NP are not equal, which supports [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") that for a lot of relevant problems evaluation is easier than producing solutions. 2. 2. Basically every formal statement that mathematicians care about can be written as a first-order logic statement with a finite number of alternating quantifiers. This suggests that recursive reward modeling can cover a very general space of tasks. 4 Challenges ------------- | | Challenges | | --- | --- | | 1 | Amount of feedback | | 2 | Feedback distribution | | 3 | Reward hacking | | 4 | Unacceptable outcomes | | 5 | Reward-result gap | | | | | --- | --- | | Approaches | | | online feedback | 1, 2, 3 | | off-policy feedback | 3, 4 | | leveraging existing data | 1 | | hierarchical feedback | 1 | | natural language | 1, 2 | | model-based RL | 3, 4 | | side-constraints | 3, 4 | | adversarial training | 3, 4, 5 | | uncertainty estimates | 1, 2, 5 | | inductive bias | 1, 2, 5 | Figure 3: Challenges when scaling reward modeling and the approaches we discuss to address them. The rightmost column lists which challenge each approach is meant to address. The success of reward modeling relies heavily on the quality of the reward model. If the reward model only captures most aspects of the objective but not all of it, this can lead the agent to find undesirable degenerate solutions (Amodei et al., [2016](#bib.bib8); Lehman et al., [2018](#bib.bib111); Ibarz et al., [2018](#bib.bib89)). In other words, the agent’s behavior depends on the reward model in a way that is potentially very fragile. Scaling reward modeling to harder and more complex tasks gives rise to a number of other challenges as well: is the amount of feedback required to learn the correct reward function affordable? Can we learn a reward function that is robust to a shift in the state distribution? Can we prevent the agent from finding loopholes in the reward model? How do we prevent unacceptable outcomes before they occur? And even if the reward model is correct, how can we train the agent to robustly produce behavior incentivized by the reward model? Each of these challenges can potentially prevent us from scaling reward modeling. In the rest of this section, we elaborate on these challenges in more detail. We do not claim that this list of challenges is exhaustive, but hopefully it includes the most important ones. [Section 5](#S5 "5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction") discusses concrete approaches to mitigating these challenges; see [Figure 3](#S4.F3 "Figure 3 ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction") for an overview. *The goal of the research direction we advocate is to investigate these approaches in order to understand whether and how they can overcome these challenges.* ### 4.1 Amount of feedback In the limit of infinite data from the right distribution, we can learn the correct reward function with enough model capacity (in the extreme case using a lookup table). However, a crucial question is whether we can attain sufficient accuracy of the reward model with an amount of data that we can produce or label within a realistic budget. Ultimately this is a question of how well generalization works on the state distribution: the better our models generalize, the more we can squeeze out of the data we have. It is possible that the agent alignment problem is actually easier for agents that have learned to be effective at sufficiently broad real world tasks if doing so requires learning high-level concepts that are highly related to user intentions that we want to teach (e.g. theory of mind, cooperation, fairness, self-models, etc.). If this is true, then the amount of effort to communicate an aligned reward function relative to these concepts could be much smaller than learning them from scratch. On the other hand, agents which do not share human inductive biases may solve tasks in surprising or undesirable ways, as the existence of adversarial examples (Szegedy et al., [2013](#bib.bib163)) demonstrates. This suggests that aligning an agent may require more than just a large quantity of labeled data; we may also need to provide our models with the the right inductive bias. ### 4.2 Feedback distribution Machine learning models typically only provide meaningful predictions on inputs that come from the same distribution that they were trained on. However, we would like a reward model that is accurate *off-policy*, on states the agent has never visited. This is crucial (1) to encourage the agent to explore positive value trajectories it has not visited and (2) to discourage the agent from exploring negative value trajectories that are undesirable. This problem is called *distributional shift* or *dataset shift* (Candela et al., [2009](#bib.bib39)). This distributional shift problem also applies to the agent’s policy model; a change in the observation distribution could make the policy output useless. However, this problem is more severe for the reward model and in some cases the policy can be recovered with finetuning if the reward model is still intact (Bahdanau et al., [2018](#bib.bib26)). It is unclear what a principled solution to this problem would be. In the absence of such a solution we could rely on out-of-distribution detection to be able to defer to a human operator or widening the training distribution to encompass all relevant cases (Tobin et al., [2017](#bib.bib166)). ### 4.3 Reward hacking Reward hacking222Reward hacking has also been called reward corruption by Everitt et al. ([2017](#bib.bib58)). is an effect that lets the agent get more reward than intended by exploiting loopholes in the process determining the reward (Amodei et al., [2016](#bib.bib8); Everitt et al., [2017](#bib.bib58)). This problem is difficult because these loopholes have to be delineated from desired creative solutions like AlphaGo’s move 37 (Metz, [2016](#bib.bib120)). Sources of undesired loopholes are *reward gaming* (Leike et al., [2017](#bib.bib114)) where the agent exploits some misspecification in the reward function, and *reward tampering* (Everitt & Hutter, [2018](#bib.bib57)) where the agent interferes with the process computing the reward. ![Refer to caption](/html/1811.07871/assets/x1.png) Figure 4: An example of gaming the reward model in Atari games. The fully trained reward model from the best seed is frozen and used to train an new agent from scratch. The plot shows the average true episode return according to the Atari reward (black) and average episode return according to the frozen reward model (green) during training. Over time the agent learns to exploit the reward model: the *perceived* performance (according to the reward model) increases, while the actual performance (according to the game score) plummets. Reproduced from Ibarz et al. ([2018](#bib.bib89)). ##### Reward gaming Opportunities for reward gaming arise when the reward function incorrectly provides high reward to some undesired behavior (Clark & Amodei, [2016](#bib.bib43); Lehman et al., [2018](#bib.bib111)); see [Figure 4](#S4.F4 "Figure 4 ‣ 4.3 Reward hacking ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction") for a concrete example. One potential source for reward gaming is the reward model’s vulnerability to adversarial inputs (Szegedy et al., [2013](#bib.bib163)). If the environment is complex enough, the agent might figure out how to specifically craft these adversarially perturbed inputs in order to trick the reward model into providing higher reward than the user intends. Unlike in most work on generating adversarial examples (Goodfellow et al., [2015](#bib.bib70); Huang et al., [2017](#bib.bib87)), the agent would not necessarily be free to synthesize any possible input to the reward model, but would need to find a way to realize adversarial observation sequences in its environment. Reward gaming problems are in principle solvable by improving the reward model. Whether this means that reward gaming problems can also be overcome in practice is arguably one of the biggest open questions and possibly the greatest weakness of reward modeling. Yet there are a few examples from the literature indicating that reward gaming can be avoided in practice. Reinforcement learning from a learned reward function has been successful in gridworlds (Bahdanau et al., [2018](#bib.bib26)), Atari games (Christiano et al., [2017](#bib.bib41); Ibarz et al., [2018](#bib.bib89)), and continuous motor control tasks (Ho & Ermon, [2016](#bib.bib85); Christiano et al., [2017](#bib.bib41)). ##### Reward tampering Reward tampering problems can be categorized according to what part of the reward process is being interfered with (Everitt & Hutter, [2018](#bib.bib57)). Crucial components of the reward process that the agent might interfere with include the feedback for the reward model (Armstrong, [2015](#bib.bib17); Everitt & Hutter, [2018](#bib.bib57)), the observation the reward model uses to determine the current reward (Ring & Orseau, [2011](#bib.bib139)), the code that implements the reward model, and the machine register holding the reward signal. For example, Super Mario World allows the agent to execute arbitrary code from inside the game (Masterjun, [2014](#bib.bib119)), theoretically allowing an agent to directly program a higher score for itself. Existing examples of tampering like this one are somewhat contrived and this may or may not be a problem in practice depending how carefully we follow good software design principles (e.g. to avoid buffer overflows). In contrast to reward gaming discussed above, reward tampering bypasses or changes the reward model. This might require a different set of solutions; rather than increasing the accuracy of the reward model, we might have to strengthen the integrity of the software and hardware of the reward model, as well as the feedback training it. ### 4.4 Unacceptable outcomes Currently, most research in deep reinforcement learning is done in simulation where unacceptable outcomes do not exist; in the worst case the simulation program can be terminated and restarted from an initial state. However, when training a reinforcement learning agent on any real-world task, there are many outcomes that are so costly that the agent needs to avoid them altogether. For example, there are emails that a personal assistant should never write; a physical robot could take actions that break its own hardware or injure a nearby human; a cooking robot may use poisonous ingredients; and so on. Avoiding unacceptable outcomes has two difficult aspects. First, for complex tasks there are always parts of the environment that are unknown and the agent needs to explore them safely (García & Fernández, [2015](#bib.bib65)). Importantly, the agent needs to learn about unsafe states without visiting them. Second, the agent needs to react robustly to perturbations that may cause it to produce unacceptable outcomes unintentionally (Ortega et al., [2018](#bib.bib133)) such as distributional changes and adversarial inputs (Szegedy et al., [2013](#bib.bib163); Huang et al., [2017](#bib.bib87)). ### 4.5 Reward-result gap The reward-result gap is exhibited by a difference between the reward model and the reward function that is recovered with perfect inverse reinforcement learning (Ng & Russell, [2000](#bib.bib126)) from the agent’s policy (the reward function the agent *seems* to be optimizing). Even if we supply the agent with a correctly aligned reward function, the resulting behavior might still be unaligned because the agent may fail to converge to an optimal policy: even provably Bayes-optimal agents may fail to converge to the optimal policy due to a lack of exploration (Orseau, [2013](#bib.bib130); Leike & Hutter, [2015](#bib.bib112)). Reasons for the reward-result gap are plentiful: rewards might be too sparse, poorly shaped, or of the wrong order of magnitude; training may stall prematurely due to bad hyperparameter settings; the agent may explore insufficiently or produce unintended behavior during its learning process; the agent may face various *robustness problems* (Leike et al., [2017](#bib.bib114); Ortega et al., [2018](#bib.bib133)) such as an externally caused change in the state space distribution or face inputs crafted by an adversary (Huang et al., [2017](#bib.bib87)). Depending on the nature of the reward-result gap, the reward model might need to be tailored to the agent’s specific shortcomings (e.g. be shaped away from unsafe states) rather than just purely capturing the human’s intentions. 5 Approaches ------------- This section discusses a number of approaches that collectively may help to mitigate the problems discussed in [Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction"). These approaches should be thought of as directions to explore; more research is needed to figure out whether they are fruitful. ### 5.1 Online feedback Preliminary experiments show failure modes when the reward model is not trained *online*, i.e. in parallel with the agent (Christiano et al., [2017](#bib.bib41); Ibarz et al., [2018](#bib.bib89)). In these cases the agent learns to exploit reward models that are frozen. Because there is no additional user feedback, loopholes in the reward model that the agent discovers cannot be corrected. If we provide the agent with reward feedback online, we get a tighter feedback loop between the user’s feedback and the agent’s behavior. This allows the reward model to be adapted to the state distribution the agent is visiting, mitigating some distributional shift problems. Moreover, with online feedback the user can spot attempts to hack the reward model and correct them accordingly. Ideally, we would like the agent to share some responsibility for determining when feedback is needed, for instance based on uncertainty estimates ([Section 5.9](#S5.SS9 "5.9 Uncertainty estimates ‣ 5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction")), since otherwise providing relevant feedback in a timely manner could be prohibitively expensive. ### 5.2 Off-policy feedback When training the agent with feedback on its behavior, this feedback is only reactive, based on outcomes that have already occurred. To prevent unacceptable outcomes and reward hacking, we need to be able to communicate that certain outcomes are undesirable *before they occur*. This requires the reward model to be accurate *off-policy*, i.e. on states the agent has never visited (Everitt et al., [2017](#bib.bib58)). If off-policy feedback is used in conjunction with model-based RL ([Section 5.6](#S5.SS6 "5.6 Model-based RL ‣ 5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction")), the agent can successfully avoid unsafe behavior that has never occurred. The user could proactively provide off-policy feedback in anticipation of potential pitfalls (Abel et al., [2017](#bib.bib2)). Off-policy feedback could be elicited by using a generative model of the environment to create hypothetical scenarios of counterfactual events. However, generative modelling of states the agent has never visited might be very difficult because of the incurred distributional shift; the resulting videos might miss important details or be incomprehensible to humans altogether. Therefore it might be more feasible to provide off-policy feedback on an abstract level, for example using natural language (Yeh et al., [2018](#bib.bib180)). This is analogous to how humans can learn about bad outcomes through story-telling and imagination (Riedl & Harrison, [2016](#bib.bib138)). ### 5.3 Leveraging existing data A large volume of human-created video data and prose is already readily available. Most of this data currently does not have high-quality text annotations and thus cannot be directly used as reward labels. Nevertheless, it contains a lot of useful information about human intentions (Riedl & Harrison, [2016](#bib.bib138)). There are at least two approaches to leverage this existing data: using unsupervised learning (such as unsupervised pretraining or third-person imitation learning; Stadie et al., [2017](#bib.bib159)) or by manually annotating it.333For example, the total length of all movies on the Internet Movie Database longer than 40min is about 500,000 hours (Peter, [2014](#bib.bib135)). Assuming a 10x overhead and $10 per hour, this data would cost ca. $50 million to annotate. ### 5.4 Hierarchical feedback The same arguments that support hierarchical RL (Dayan & Hinton, [1993](#bib.bib44); Sutton et al., [1999](#bib.bib162); Vezhnevets et al., [2017](#bib.bib171)) also encourage having a hierarchical decomposition of the reward model. This would allow the user to provide both low-level and high-level feedback. Both hierarchical RL and hierarchical reward models should be quite natural to combine: if the temporal hierarchies between agent and reward model align, then at each level of the hierarchy the reward model can train the corresponding level of the agent. This might help bypass some very difficult long-term credit assignment problems. For example, recall the fantasy author task from [Section 3.2](#S3.SS2 "3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction"). The low-level feedback would include spelling, fluency, and tone of language while high-level feedback could target plot and character development that cannot be provided on a paragraph level. ### 5.5 Natural language Since we want agents to be able to pursue and achieve a wide variety of goals in the same environment and be able to specify them in a way that is natural to humans, we could model the reward function as conditioned on natural language instructions (Bahdanau et al., [2018](#bib.bib26)). These natural language instructions can be viewed as human-readable task labels. Moreover, they provide a separate privileged channel that should be easier to protect and harder to spoof than any instructions that are received through the observation channel. In addition to providing task labels, we could also make natural language a more central part of the agent’s architecture and training procedure. This has a number of advantages. 1. 1. Natural language is a natural form of feedback for humans. If we can learn to translate natural language utterances into the rigid format required for the data set the reward model is trained on, this would allow users to give feedback much more efficiently. 2. 2. Natural language has the potential to achieve better generalization if the latent space is represented using language (Andreas et al., [2018](#bib.bib10)) and possibly generalize in a way that is more predictable to humans. This might also help to mitigate distributional problems for the reward model ([Section 4.2](#S4.SS2 "4.2 Feedback distribution ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")): if the training distribution is reasonably dense in the space of natural language paragraphs, this might make out-of-distribution inputs very rare. 3. 3. Natural language might lead to substantially better interpretability. Especially for abstract high-level concepts, natural language might be much better suited than visual interpretability techniques (Olah et al., [2018](#bib.bib128)). However, by default the reward model’s representations might not correspond neatly with short natural language expressions and will probably need to be trained particularly for this target (without producing rationalizations). ### 5.6 Model-based RL A *model-based* RL agent learns an explicit model of the environment which it can use with a planning algorithm such as Monte Carlo tree search (Abramson, [1987](#bib.bib3); Kocsis & Szepesvári, [2006](#bib.bib100)). If we are training a model-based agent, the reward model can be part of the search process at planning time. This allows the agent to use *off-policy* reward estimates, estimated for actions it never actually takes, provided that the reward model is accurate off-policy ([Section 5.2](#S5.SS2 "5.2 Off-policy feedback ‣ 5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction")). This has a number of advantages: 1. 1. The agent can avoid unacceptable outcomes ([Section 4.4](#S4.SS4 "4.4 Unacceptable outcomes ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")) by discovering them during planning. 2. 2. The agent’s model could be used to solicit feedback from the user for outcomes that have not yet occured. 3. 3. The agent can adapt to changes in the reward model more quickly because it can backup these changes to value estimates using the model without interaction with the environment. 4. 4. Model-based approaches enable principled solutions to the reward tampering problem ([Section 4.3](#S4.SS3 "4.3 Reward hacking ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")) by evaluating future outcomes with the current reward model during planning (Everitt, [2018](#bib.bib56), Part II). Agents that plan this way have no incentive to change their reward functions (Schmidhuber, [2007](#bib.bib146); Omohundro, [2008](#bib.bib129)), nor manipulate the register holding the reward signal (Everitt, [2018](#bib.bib56), Sec. 6.3). ### 5.7 Side-constraints In addition to learning a reward function, we could also learn side-constraints for low-level or high-level actions (*options*; Sutton et al., [1999](#bib.bib162)) to prevent unacceptable outcomes. Blocking actions can be more effective than discouraging them with large negative reward since negative rewards could be compensated by larger rewards later (such as in the case of reward hacking). This problem could be amplified by errors in the agent’s model of the world. The same techniques described here for training a reward model should apply to train a model that estimates side-constraints and blocks low-level actions (Saunders et al., [2018](#bib.bib145)) or enforces constraints during policy updates (Achiam et al., [2017](#bib.bib4)). The main downside of this technique is that it puts additional burden on the human because they have to understand which actions can lead to unacceptable outcomes. Depending on the domain, this might require the human to be assisted by other agents. These agents could in turn be trained using recursive reward modeling ([Section 3.2](#S3.SS2 "3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction")). ### 5.8 Adversarial training To mitigate the effect of adversarially crafted inputs to neural networks (Szegedy et al., [2013](#bib.bib163)), so far the empirically most effective strategy has been *adversarial training*: training the model explicitly on adversarially perturbed inputs (Madry et al., [2017](#bib.bib117); Uesato et al., [2018](#bib.bib169); Athalye et al., [2018](#bib.bib23)). However, it is unclear how to define adversarial perturbation rigorously in a general way (Brown et al., [2018](#bib.bib37); Gilmer et al., [2018](#bib.bib68)). To cover more general cases, we could train agents to explicitly discover weaknesses in the reward model and opportunities for reward hacking as well as the minimal perturbation that leads to an unacceptable outcome (Anonymous, [2019c](#bib.bib13)). This is analogous to *red teams*, teams whose objective is to find attack strategies (e.g. security vulnerabilities) that an adversary might use (Mulvaney, [2012](#bib.bib124)). The discovered failure cases can then be reviewed by the user and added to the feedback dataset. This might mean higher data requirements; so even if adversarial training fixes the problem, it might push the data requirements beyond affordable limits. ### 5.9 Uncertainty estimates Another desirable feature of the reward model is an appropriate expression of uncertainty regarding its outputs. Improving uncertainty estimates brings two benefits: 1. 1. During training, it can help automate the process of soliciting feedback about the most informative states (Krueger et al., [2016](#bib.bib104); Schulze & Evans, [2018](#bib.bib149)) using active learning (Settles, [2012](#bib.bib150)). 2. 2. The agent can defer to the human or fall back to risk-averse decision making when uncertainty is large, for instance on inputs that do not resemble the training distribution (Hendrycks & Gimpel, [2017](#bib.bib81)). A number of recent works develop scaleable approximate Bayesian methods for neural networks, beginning with Graves ([2011](#bib.bib73)), Blundell et al. ([2015](#bib.bib34)), Kingma et al. ([2015](#bib.bib97)), Hernández-Lobato & Adams ([2015](#bib.bib82)), and Gal & Ghahramani ([2016](#bib.bib62)). So far model ensembles provide a very strong baseline (Lakshminarayanan et al., [2017](#bib.bib107)). Bayesian methods untangle irreducible uncertainty from ‘epistemic’ uncertainty about which parameters are correct, which decreases with the amount of data (Kendall & Gal, [2017](#bib.bib95)); this distinction can help with active learning (Gal et al., [2017b](#bib.bib64)). Other works aim to calibrate the predictions of neural networks (Guo et al., [2017](#bib.bib77)), so that their subjective uncertainty corresponds with their empirical frequency of mistakes. While Bayesian methods can help with calibration (Gal et al., [2017a](#bib.bib63)), they are insufficient in practice for deep neural networks (Kuleshov et al., [2018](#bib.bib105)). Well-calibrated models could engage risk-averse decision making, but handling out-of-distribution states reliably would require higher quality uncertainty estimates than current deep learning techniques can provide (Shafaei et al., [2018](#bib.bib151)). ### 5.10 Inductive bias Finally, a crucial aspect of reward modeling is the inductive bias of the reward model. Since we cannot train the reward model and the agent on all possible outcomes, we need it to generalize appropriately from the given data (Zhang et al., [2017](#bib.bib183), [2018a](#bib.bib184)). The success of deep learning has been attributed to inductive biases such as distributed representations and compositionality, which may also be necessary in order to defeat the ‘curse of dimensionality’ (Bengio et al., [2013](#bib.bib33)). Yet further inductive biases are necessary to solve many tasks; for instance, convolutional neural networks (LeCun et al., [1990](#bib.bib108)) vastly outperform multilayer perceptrons in computer vision applications because of their spatial invariance. Solving reward modeling may require non-standard inductive biases; for instance modern deep networks typically use piece-wise linear activation functions (Nair & Hinton, [2010](#bib.bib125); Glorot et al., [2011](#bib.bib69); Goodfellow et al., [2013](#bib.bib72); Xu et al., [2015](#bib.bib178)), which generalize linearly far from training data (Goodfellow et al., [2015](#bib.bib70)), meaning estimated reward would go to positive or negative infinity for extreme inputs. The inductive bias of deep models can be influenced by the architecture, activation functions, and training procedure. A growing body of work targets *systematic generalization* in deep models. Examples include modularity (Anonymous, [2019b](#bib.bib12)), recursion (Cai et al., [2017](#bib.bib38)), graph structure (Battaglia et al., [2018](#bib.bib31)) or natural language (Andreas et al., [2018](#bib.bib10)) in the latent space, differentiable external memory (Graves et al., [2016](#bib.bib74)), or neural units designed to perform arbitrary arithmetic operations (Trask et al., [2018](#bib.bib167)). 6 Establishing trust --------------------- Suppose our research direction is successful and we figure out how to train agents to behave in accordance with user intentions. How can we be confident that the agent we are training is indeed sufficiently aligned? In other words, how can we be confident that we have overcome the challenges from [Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction") and that the agent’s behavior sufficiently captures human intentions? This requires additional techniques that allow us to gain *trust* in the agents we are training. An ambitious goal is to enable the production of *safety certificates*, artifacts that serve as evidence to convince a third party to trust our system. These safety certificates could be used to prove responsible technology development, defuse competition, and demonstrate compliance with regulations. A safety certificate could take the form of a score on a secret test suite held by a third party, evidence of interpretability properties, or a machine-checkable formal proof of correctness with respect to some established specification, among others. A few general approaches for building trust in our models are discussed below. ![Refer to caption](/html/1811.07871/assets/x2.png) Figure 5: Alignment of learned reward functions in 9 Atari games: Scatterplot showing the correlation of the reward learned from user preferences (y-axis) with the true Atari reward (x-axis) averaged over 1000 timesteps. For a fully aligned reward function, all points would be on a straight line. In these experiments the reward model is well-aligned in some games like Beamrider, Hero, and Q\*bert, and poorly aligned in others like Private Eye, Breakout, and Mondezuma’s Revenge. Reproduced from Ibarz et al. ([2018](#bib.bib89)). ##### Design choices. Separating learning the objective from learning the behavior allows us to achieve higher confidence in the resulting behavior because we can split trust in the reward model from trust in the policy. For example, we can measure how well the reward function aligns with the task objective by evaluating it on the user’s feedback (see [Figure 5](#S6.F5 "Figure 5 ‣ 6 Establishing trust ‣ Scalable agent alignment via reward modeling: a research direction")). If we understand and trust the reward model, we know what the agent is ‘trying’ to accomplish. If [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") is true, then the reward model should be easier to interpret and debug than the policy. Another design choice that could increase trust in the system is to split our policy into two parts: a *plan generator* and a *plan executor*. The plan generator produces a human-readable plan of the current course of action. This plan could be very high-level like a business plan or a research proposal, or fairly low-level like a cooking recipe. This plan can then optionally be reviewed and signed off by the user. The plan executor then takes the plan and implements it. Clean, well-understood design choices on training setup, model architecture, loss function, and so on can lead to more predictable behavior and thus increase our overall confidence in the resulting system (as opposed to e.g. training a big blob of parameters end-to-end). Especially if we manage to formally specify certain safety properties (Orseau & Armstrong, [2016](#bib.bib131); Krakovna et al., [2018](#bib.bib101)), we can then make them an explicit part of our agent design. ##### Testing. Evaluation on a separate held-out test set is already common practice in machine learning. For supervised learning, the performance of a trained model is estimated by the empirical risk on a held-out test set which is drawn from the same data distribution. This practise can readily be applied to reward model (Ibarz et al., [2018](#bib.bib89)) and policy, e.g. on a set of specifically designed simulated environments (Leike et al., [2017](#bib.bib114)) or even adversarially where an attacker explicitly tries to cause misbehavior in the agent (Anonymous, [2019c](#bib.bib13)). ##### Interpretability. Interpretability has been defined as the ability to explain or to present in understandable terms to a human (Doshi-Velez & Kim, [2017](#bib.bib47)). Currently widely used deep neural networks are mostly black boxes, and understanding their internal functionality is considered very difficult. Nevertheless, recent progress provides reason for optimism that we will be able to make these black boxes increasingly transparent. This includes preliminary work on visualizing the latent state space of agents using t-SNE plots (Zahavy et al., [2016](#bib.bib182); Jaderberg et al., [2018](#bib.bib92)), examining what agents attend to when they make decisions (Greydanus et al., [2018](#bib.bib75)), evaluating models’ sensitivity to the presence/intensity of high-level human concepts (Kim et al., [2017](#bib.bib96)), optimizing a model to be more interpretable with humans in the loop (Lage et al., [2018](#bib.bib106)), translating neural activations into natural language on tasks also performed by humans (Andreas et al., [2017](#bib.bib9)), and combining different interactive visualization techniques (Olah et al., [2018](#bib.bib128)), to name only a few. ##### Formal verification. Recent progress on model checking for neural networks opens the door for formal verification of trained models (Katz et al., [2017](#bib.bib94)). The size of verified models has been pushed beyond MNIST-size to over a million parameters (Dvijotham et al., [2018b](#bib.bib49); Wong et al., [2018](#bib.bib177)), which indicates that verifying practically sized RL models might soon be within reach. If formal verification can be scaled, we could attempt to verify properties of policies (Bastani et al., [2018](#bib.bib30)) and reward functions with respect to a high-level specification, including off-switches, side-effects, and others mentioned in [Section 3.1](#S3.SS1 "3.1 Reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction"). If [Assumption 1](#Thmassumption1 "Assumption 1 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") from [Section 1](#S1 "1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") is true, then this specification does not have to be manually written, but instead can be provided by a separately learned model. However, in this case a formal correctness proof is only as useful as this learned specification is accurate. To make the verification task easier, our models could be trained to be more easily verifiable (Dvijotham et al., [2018a](#bib.bib48)). However, this opens the door for degenerate solutions that exploit loopholes in the learned specification. This is analogous to problems with reward hacking ([Section 4.3](#S4.SS3 "4.3 Reward hacking ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")) which train a policy to optimize a frozen reward model ([Figure 4](#S4.F4 "Figure 4 ‣ 4.3 Reward hacking ‣ 4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")). Circumventing this problem could be done using the same techniques that have been successful for reward hacking, such as learning the specification online using user feedback ([Section 5.1](#S5.SS1 "5.1 Online feedback ‣ 5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction")). ##### Theoretical guarantees. Finally, even more ambitious would be the development of theoretically well-founded scalable learning algorithms that come with *probably approximately correct* (Dziugaite & Roy, [2017](#bib.bib50)) or *sample complexity* guarantees, capacity statements, well-calibrated uncertainty estimates, etc. (Veness et al., [2017](#bib.bib170)). Unfortunately, currently there is a dire lack of any such guarantees for the popular deep neural network architectures and training techniques. 7 Alternatives for agent alignment ----------------------------------- The research direction we outline in this paper is not the only possible path to solve the agent alignment problem. While we believe it is currently the most promising one to explore, it is not guaranteed to succeed. Fortunately there are a number of other promising directions for agent alignment. These can be pursued in parallel or even combined with each other. This section provides an overview and explains how our approach relates to them. Our list is not exhaustive; more directions are likely to be proposed in the future. ### 7.1 Imitation learning One strategy to train aligned agents could be from imitating human behavior (Pomerleau, [1991](#bib.bib136); Abbeel & Ng, [2004](#bib.bib1); Ho & Ermon, [2016](#bib.bib85); Finn et al., [2016](#bib.bib60)). An agent imitating aligned human behavior sufficiently well should be aligned as well. The following caveats apply: * • *Amount of data*. While feedback can often be provided by non-experts, the data for human imitation has to be provided by experts on the task. This might be much more expensive data and it is not clear if we need more or less than for reward modeling. * • *Cognitive imitation.* It is possible that a lot of cognitively demanding tasks that humans do rely on very high-level intuition, planning, and other cognitive processes that are poorly reflected in human actions. For example, a crucial insight for solving a problem might be gained from drawing an analogy with a different problem encountered in a different domain. This might be hard to replicate and predict from data about human actions alone. * • *Generalization.* In order to be useful, our agent trained with imitation learning needs to showcase persistently high-quality behavior, even in the face of novel situations. Analogous to [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction"), generalizing learned reward functions might be easier than generalizing behavior (Bahdanau et al., [2018](#bib.bib26)). * • *Performance.* It is generally difficult to outperform humans using imitation learning alone (Hester et al., [2018](#bib.bib83)): even a perfect imitator can only perform as well as the source it is imitating; superhuman performance typically comes from executing human action sequences faster and more reliably by smoothing out inconsistencies in human behavior (Aytar et al., [2018](#bib.bib24)). Therefore imitation learning is unlikely to be competitive with other strategies to train agents in the longer term. However, it might be sufficient to act as a ‘stepping stone’: agents trained with imitation learning might act as ‘research assistants’ and help scale up other alignment efforts. Therefore it should be considered as a strong alternative to our research strategy. ### 7.2 Inverse reinforcement learning We can view a reinforcement learning algorithm as a mapping from a reward function to behavior. The inverse of that mapping takes agent behavior as input and produces a reward function; this is known as *inverse reinforcement learning* (IRL; Russell, [1998](#bib.bib141); Ng & Russell, [2000](#bib.bib126)). In this sense, inverse reinforcement learning can be viewed as one approach to reward modeling that takes feedback in the form of trajectories of behavior. However, taken as it is, it had two shortcomings: 1. 1. IRL is an under-constrained problem because the reward function is not uniquely identifiable (not even up to affine-linear transformation) from behavior alone (Ng & Russell, [2000](#bib.bib126)); for example, R=0𝑅0R=0italic\_R = 0 is always a solution. If we assume the human is fully rational and the agent can design a sequence of tasks for the human, then the reward function can be identified (Amin et al., [2017](#bib.bib7)). Even some assumptions about the human’s rationality can be relaxed (Evans et al., [2016](#bib.bib54)), but in full generality the inverse reinforcement learning problem becomes impossible to solve (Armstrong & Mindermann, [2018](#bib.bib19)). 2. 2. It assumes the human is acting to optimize their reward directly, even when this is an inefficient way of communicating their preferences. For instance, it is much easier for a human to state ‘I would like you to make me coffee every morning at 8am’ than it is for the human to make themselves coffee at 8am several days in a row. ### 7.3 Cooperative inverse reinforcement learning Motivated by this second shortcoming of IRL, Hadfield-Menell et al. ([2016](#bib.bib78)) propose *cooperative inverse reinforcement learning* (CIRL). CIRL is a formal model of reward modeling as a two player game between a user and an agent which proceeds as follows. 1. 1. The user and the agent begin with a shared prior over the user’s reward function, 2. 2. the user then observes their reward function, and finally 3. 3. both user and agent execute policies to optimize the user’s reward function. An optimal solution to a CIRL game would use the common knowledge of the user and the agent to compute a policy for the agent (to be executed in step 3), and a mapping from reward function to policy for the user. Then upon observing their reward function in step 2, the user should select the corresponding policy for them to execute in step 3. Both the user and the agent have to choose behaviors which trade off between (1) communicating the user’s reward function to the agent and (2) directly maximizing the user’s expected reward. We make two observations about CIRL as an approach to agent alignment that highlight that CIRL abstracts away from some important details. First, the performance of a CIRL algorithm will depend on the quality of the prior over reward functions. In essence, CIRL replaces the problem of specifying a reward function with specifying a prior over reward functions. Second, computing the optimal solution to the CIRL problem is not realistic, since we cannot prescribe exactly how the user should interact with the agent. In other words, an efficient solution to a CIRL game might employ a strategy that transmits the parameters from the user to the agent, followed by a normal RL algorithm executed by both the user and the agent (since the reward is now fully observable to both). But if the user were able to observe their reward function, they could just specify this to an RL agent directly. In other words, one of the difficulties of agent alignment is that the reward function is not directly available to the user in the first place: users are usually not very aware of all of their preferences, and it might instead be easier for them to communicate through revealed preferences (Samuelson, [1938](#bib.bib144)). Nevertheless, CIRL incorporates two important insights into the alignment problem that also motivate our research direction: 1. 1. Constructing agents to optimize a *latent* reward function can help align them on tasks where we cannot consistently provide reward feedback about all state-action pairs as the agent is visiting them. 2. 2. A key challenge of the agent alignment problem is finding efficient ways to communicate the user’s intentions to learning agents. ### 7.4 Myopic reinforcement learning Myopic RL agents only maximize reward in the present timestep instead of a (discounted) sum of future rewards. This means that they are more short-sighted and thus not incentivized to execute long-term plans or take actions that are bad in the short-term in order to get a long-term benefit. In particular, myopic RL agents might be less prone to some of the design specification problems mentioned in [Section 3.1](#S3.SS1 "3.1 Reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction"), since causing them might take several time-steps to pay off for the agent. There are two main myopic RL algorithms. TAMER (Knox & Stone, [2009](#bib.bib99); Knox, [2012](#bib.bib98); Warnell et al., [2017](#bib.bib172)) is a collection of algorithms that learn a policy from human value feedback, i.e. take actions that maximize expected feedback in the next step (possibly with short temporal smoothing). COACH (MacGlashan et al., [2017](#bib.bib116); Arumugam et al., [2018](#bib.bib21)) is an algorithm that trains a policy from feedback in the form of an *advantage function* (Sutton & Barto, [2018](#bib.bib161)). In contrast to imitation learning, the user does not have to be able to produce the desired behavior, just be able to reward the individual actions that bring it about. For example, using TAMER or COACH, a user could teach an agent to perform a backflip without being able to do one themself. However, while myopic RL may increase alignment, is also comes with performance drawbacks. Training myopic RL agents puts the burden of solving the credit assignment problem onto the user, limiting the agent’s potential for ingenuity and thus performance, and also leaving the user responsible for avoiding long-term negative consequences. Despite these limits, myopic RL agents might be sufficient for some tasks where credit assignment is reasonably easy for humans. They might also be used as building blocks in more capable training regimes, for instance in iterated amplification (Christiano et al., [2018](#bib.bib42)). ### 7.5 Imitating expert reasoning Another alternative is to train a model to imitate expert reasoning. The imitation can happen at a level of granularity decided by the expert and could include ‘internal’ reasoning steps that the expert would not typically perform explicitly. This expert reasoning can then be improved and accelerated (Christiano et al., [2018](#bib.bib42); Evans et al., [2018](#bib.bib55); Stuhlmüller, [2018](#bib.bib160)). The basic idea is best illustrated with a question answering system. The input to the system is a question Q𝑄Qitalic\_Q and its output an answer A𝐴Aitalic\_A. For simplicity we can treat both Q𝑄Qitalic\_Q and A𝐴Aitalic\_A as natural language strings. The system can call itself recursively by asking subquestions Q1,…,Qksubscript𝑄1…subscript𝑄𝑘Q\_{1},\ldots,Q\_{k}italic\_Q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_Q start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, receiving their answers A1,…,Aksubscript𝐴1…subscript𝐴𝑘A\_{1},\ldots,A\_{k}italic\_A start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_A start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT, and composing them into the answer A𝐴Aitalic\_A. For example, consider the question Q𝑄Qitalic\_Q ‘How many pineapples are there in Denmark?’ To give an approximate answer, we could make a *Fermi estimate* by asking the subquestions ‘What is the population of Denmark?’, ‘How many pineapples does the average Dane consume per year?’, and ‘How long are pineapples stored?’ These subquestions are then answered recursively and their answers can be composed into an answer to the original question Q𝑄Qitalic\_Q. We could train a model to answer questions Q𝑄Qitalic\_Q recursively by using the same reasoning procedure as the expert using imitation learning ([Section 7.1](#S7.SS1 "7.1 Imitation learning ‣ 7 Alternatives for agent alignment ‣ Scalable agent alignment via reward modeling: a research direction")). This model can then be improved using a variety of methods: * • Running many copies of this model in parallel and/or at greater speed. * • Training a new model to predict answers to questions without having to expand the subquestions, akin to using a value network to the estimate the result of a tree search (Anthony et al., [2017](#bib.bib14); Silver et al., [2017](#bib.bib154)). * • Making the expert reasoning more coherent under reflection. For example, by searching for inconsistencies in the expert’s reasoning and resolving them. If we believe expert reasoning is aligned with the user, then we could hope that the resulting improved model is also aligned. This training procedure aims to achieve better interpretability and greater trust in the resulting agents than recursive reward modeling ([Section 3.2](#S3.SS2 "3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction")). However, learning expert reasoning might not be economically competitive with recursive reward modeling, depending on how good the expert’s reasoning is and whether [Assumption 2](#Thmassumption2 "Assumption 2 ‣ Assumptions. ‣ 1 Introduction ‣ Scalable agent alignment via reward modeling: a research direction") holds for the task at hand. Even though both are an instance of the more general framework of iterated amplification (Christiano et al., [2018](#bib.bib42)), recursive reward modeling as described in [Section 3.2](#S3.SS2 "3.2 Recursive reward modeling ‣ 3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction") does not try to model expert reasoning explicitly. Instead, recursive reward modeling only requires users to evaluate outcomes. Nevertheless, it relies on decomposition of the evaluation task which has similarities to the decompositional reasoning described here. When using recursive reward modeling users have the *option* to provide feedback on the cognitive process that produced outcomes, but they are not required to do so. Moreover, this feedback might be difficult to provide in practice if the policy model is not very interpretable. ### 7.6 Debate Irving et al. ([2018](#bib.bib91)) describe an idea for agent alignment that involves a two-player zero-sum game in which both players are debating a question for the user. The two players take turns to output a short statement up to a turn limit. At the end of the game the user reads the conversation transcript and declares the player who contributed the most true and useful statements the winner. The debate proposal involves training an agent with self play (Silver et al., [2016](#bib.bib153)) on this debate game. In order to become aligned, this agent needs to be trained in a way that it converges to a Nash equilibrium in which both instances of the agent try to be helpful to the user. The central assumption of debate is that it is easier for the agent to tell the truth than it is to lie. If this assumption holds, then the dynamics of the game should incentivize the agent to provide true and useful statements. The authors provide initial experiments on the MNIST dataset in which the debating agents manage to boost the accuracy of a sparse classifier that only has access to a few of the image’s pixels. While these initial experiments are promising, more research is needed in order to determine whether debate is a scalable alignment approach. We need more empirical evidence to clarify, among others, the following two questions. 1. 1. Does the central assumption of debate hold outside domains of easily fact-checkable statements? 2. 2. Can the humans accurately judge the debate even if the debaters have superior persuasion and deception ability? ### 7.7 Other related work Many of the practical challenges to reward modeling we raise here have already been discussed by Amodei et al. ([2016](#bib.bib8)): safe exploration, distributional shift, side-effects, and reward hacking. In particular, the authors highlight what they call the scalable oversight problem, how to train an RL agent with sparse human feedback. This can be understood as a more narrow version of the alignment problem we are aiming to solve here. In a similar spirit, Taylor et al. ([2016](#bib.bib164)) survey a number of high-level open research questions on agent alignment. Most closely related to our approach are what the authors call informed oversight (building systems that help explain outcomes), generalizable environmental goals (defining objective functions in terms of environment states), and averting instrumental incentives (preventing the system from optimizing for certain undesirable subgoals). Soares & Fallenstein ([2017](#bib.bib156)) outline a research agenda of a very different flavor. Their research problems are quite paradigm-agnostic and instead concern the theoretical foundations of mathematical agent models. In particular, many of their problems aim to address perceived difficulties in applying current notions of optimal behavior to agents which are part of their environment (Orseau & Ring, [2012](#bib.bib132)) and thus may not remain cleanly delineated from it (Demski & Garrabrant, [2018](#bib.bib45)). The authors seek the formal tools to ask questions about or relevant to alignment in theory, such as when provided with a halting oracle (Hutter, [2005](#bib.bib88)). These formal tools could be necessary for formal verification of agents designing upgraded versions of themselves. Yet while there has been some of progress on this research agenda (Barasz et al., [2014](#bib.bib27); Leike et al., [2016](#bib.bib113); Garrabrant et al., [2016](#bib.bib67); Everitt, [2018](#bib.bib56)), some questions turned out to be quite difficult. But even if we had formal solutions to the problems put forth by [Soares & Fallenstein](#bib.bib156), there would still persist a gap to transfer these solutions to align agents in practice. For now, answers to these research questions should be understood more as intuition pumps for practical alignment questions rather than direct solutions themselves (Garrabrant, [2018](#bib.bib66)). See Everitt et al. ([2018](#bib.bib59)) for more in-depth survey and literature review. 8 Discussion ------------- ##### Summary. The version of the agent alignment problem we are aiming to solve involves aligning a single agent to a single user ([Section 2](#S2 "2 The agent alignment problem ‣ Scalable agent alignment via reward modeling: a research direction")). Instead of attempting to learn the entire preference payload, we outline an approach for enabling the user to communicate their intentions to the agent for the task at hand so that it allows them to trust the trained agent. Our research direction for agent alignment is based on scaling reward modeling ([Section 3](#S3 "3 Scaling reward modeling ‣ Scalable agent alignment via reward modeling: a research direction")). This direction fits well into existing efforts in machine learning because it can benefit from advances in the state of the art in supervised learning (for the reward model) and reinforcement learning (for the policy). Building on previous work ([Section 7](#S7 "7 Alternatives for agent alignment ‣ Scalable agent alignment via reward modeling: a research direction")), we provide significantly more detail, including the main challenges ([Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction")) and concrete approaches to mitigate these challenges ([Section 5](#S5 "5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction")) and to establish trust in the agents we train ([Section 6](#S6 "6 Establishing trust ‣ Scalable agent alignment via reward modeling: a research direction")). In essence, this document combines existing efforts on AI safety problems by providing one coherent narrative around how solving these problems could enable us to train aligned agents beyond human-level performance. ##### Concrete research projects. Our research direction is ‘shovel-ready’ for empirical research today. We can set up experiments with deep reinforcement learning agents: getting empirical data on the severity of the challenges from [Section 4](#S4 "4 Challenges ‣ Scalable agent alignment via reward modeling: a research direction"); prototyping solution ideas from [Section 5](#S5 "5 Approaches ‣ Scalable agent alignment via reward modeling: a research direction"); scaling reward modeling to more difficult tasks; pushing the frontiers on (adversarial) testing, interpretability, formal verification, and the theory of deep RL. Moreover, we can readily use any existing RL benchmarks such as games or simulated environments that come with pre-programmed reward functions: By hiding this reward function from the algorithm we can pretend it is unavailable, but still use it for synthetically generated user feedback (Christiano et al., [2017](#bib.bib41)) as well as the evaluation of the learned reward model (Ibarz et al., [2018](#bib.bib89)). ##### Outlook. There is enormous potential for ML to have a positive impact on the real world and improve human lives. Since most real-world problems can be cast in the RL framework, deep RL is a particularly promising technique for solving real-world problems. However, in order to unlock its potential, we need to train agents in the absence of well-specified reward functions. Just as proactive research into robustness of computer vision systems is essential for addressing vulnerabilities to adversarial inputs, so could alignment research be key to getting ahead of future bottlenecks to the deployment of ML systems in complex real-world domains. For now, agent alignment research is still in its early stages, but we believe that there is substantial reason for optimism. While we expect to face challenges when scaling reward modeling, these challenges are concrete technical problems that we can make progress on with targeted research. #### Acknowledgments This paper has benefited greatly from discussions with many people at DeepMind, OpenAI, and the Future of Humanity Institute. For detailed feedback we are particularly grateful to Paul Christiano, Andreas Stuhlmüller, Ramana Kumar, Laurent Orseau, Edward Grefenstette, Klaus Greff, Shahar Avin, Tegan Maharaj, Victoria Krakovna, Geoffrey Irving, Owain Evans, Andrew Trask, Iason Gabriel, Elizabeth Barnes, Miles Brundage, Alex Zhu, Vlad Firoiu, Serkan Cabi, Richard Ngo, Jonathan Uesato, Tim Genewein, Nick Bostrom, Dario Amodei, Felix Hill, Tom McGrath, Borja Ibarz, Reimar Leike, Pushmeet Kohli, Greg Wayne, Timothy Lillicrap, Chad Burns, Teddy Collins, Adam Cain, Jelena Luketina, Eric Drexler, Toby Ord, Zac Kenton, and Pedro Ortega.
851add98-c397-495c-beaa-6a495a95c9d4
trentmkelly/LessWrong-43k
LessWrong
How do you deal w/ Super Stimuli? From neel.fun I remember watching Youtube videos and thinking "This is the last video, I will quit after this". However, as soon as the video ends, my preferences would suddenly change to wanting to do one more!  Many of us understand this false dichotomy: 1. Quit mid-way through the video 2. Quit after the video ends at 3 am  So I would (sometimes) stop binging videos, but only if I quit mid-way through. Then short-form videos wrecked me. These (tik-toks/shorts/reels) have NO "middle of the video" to have a moment of reflection. I'm constantly in that "high preference for the next hit" state. The algorithms are optimized against me and they're only getting worse! From social dark matter, if it's taboo to admit to some "problem", then you won't hear about how many people have that "problem"[1]  My shoddy estimate[2] tells me that: - 1/2 people reading this "waste" 1.5 hrs/day on unendorsed hyper-stimuli - 1/4 waste 3+ hrs/day - 1/5 waste 4+ hrs/day - 1/10 waste 5+ hrs/day And it's embarassing to talk about! I'm only writing this post because I have been productive/non-binging these past 2 weeks.  Solutions are likely personal, but here's what worked for me. Keeping electronics & chargers outside of bedroom When I waste time, it's usually on my side, in bed w/ my phone up to 3am (or when my phone dies).  Turns out I can simply keep my charger outside my bedroom and hook it up at night. In it's place, I've been reading fun, hard-cover fiction books, like The Martian, which I really enjoyed and look forward to:)[3]  Since I've started associating laying down in bed w/ only going to sleep, I've gotten much better sleep (so unintuitive, right?) You too could move your chargers outside your room, read fun books, and get great sleep!  Seeing a Psychiatrist I think I might have ADD, so I scheduled an appointment with a psychiatrist with that ADHD-specialty advertised. I got an off-label prescription for Wellbutrin, which isn't an ADHD treatment (hence "off
6c0262ab-07f3-4094-a85d-e228a7e7af50
StampyAI/alignment-research-dataset/special_docs
Other
2016 International Symposium on Experimental Robotics [] [] Volume 1 Springer Proceedings in Advanced Robotics Series Editors Bruno Siciliano Università di Napoli Federico II, Napoli, Italy Oussama Khatib Stanford University, Stanford, California, USA Editorial Advisory Board Gianluca Antonelli, University of Cassino, Italy Dieter Fox, University of Washington, USA Kensuke Harada, Osaka University, Japan M. Ani Hsieh, University of Pennsylvania, USA Torsten Kröger, Karlsruhe Institute of Technology, Germany Dana Kulić, University of Waterloo, Canada Jaehung Park, Seoul National University, South Korea More information about this series at http://​www.​springer.​com/​series/​15556 Editors Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture 2016 International Symposium on Experimental Robotics [] Editors Dana Kulić Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada Yoshihiko Nakamura Graduate School of Information Science and Technology, Department of Mechano-Informatics, The University of Tokyo, Tokyo, Japan Oussama Khatib Computer Science Department, Stanford University, Stanford, California, USA Gentiane Venture Department of Mechanical Systems Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan ISSN 2511-1256e-ISSN 2511-1264 Springer Proceedings in Advanced Robotics ISBN 978-3-319-50114-7e-ISBN 978-3-319-50115-4 DOI 10.1007/978-3-319-50115-4 Library of Congress Control Number: 2016960698 © Springer International Publishing AG 2017 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. Printed on acid-free paper This Springer imprint is published by Springer Nature The registered company is Springer International Publishing AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Foreword Robots! Robots on Mars and in oceans, in hospitals and homes, in factories and schools; robots fighting fires, making goods and products, saving time and lives. Robots today are making a considerable impact from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will become pervasive and touch upon many aspects of modern life. The Springer Tracts in Advanced Robotics (STAR) was launched in 2002 with the goal of bringing to the research community the latest advances in the robotics field on the basis of their significance and quality. During the latest fifteen years, the STAR series has featured publication of both monographs and edited collections. Among the latter, the proceedings of thematic symposia devoted to excellence in robotics research, such as ISRR, ISER, FSR and WAFR, have been regularly included in STAR. The expansion of our field as well as the emergence of new research areas has motivated us to enlarging the pool of proceedings to be published in STAR in the last few years. This has ultimately led us to launching a sister series in parallel to STAR. The Springer Proceedings in Advanced Robotics (SPAR) is dedicated to the timely dissemination of the latest research results presented in selected symposia and workshops. The inaugural volume of the SPAR series is devoted to the Proceedings of one of the traditional quality symposia in our community, the International Symposium on Experimental Robotics (ISER). The fifteenth edition took place in Tokyo from 3 to 6 October 2016. The volume edited by Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture contains 73 scientific papers organised in 12 chapters. The contributions span a wide coverage ranging from aerial to mobile robots, from grasping to manipulation, from actuation to control, from planning to human–robot interaction. Experimental validation of algorithms, concepts, or techniques is the common thread running through this large research collection. From its warm social interaction to its excellent technical program, ISER culminates with this unique reference on the current developments and new directions of experimental robotics – a genuine tribute to its contributors and organizers! Bruno SicilianoSPAR Editor Naples, Italy November 2016 ISER Proceedings Preface Experimental Robotics XV is the collection of papers presented at the International Symposium on Experimental Robotics, Roppongi, Tokyo, Japan during October 3–6, 2016. Totally 73 scientific papers were selected and presented after peer review. The papers span a broad range of sub-fields in robotics including aerial robots, mobile robots, actuation, grasping, manipulation, planning and control and human–robot interaction, but shared cutting-edge approaches and paradigms to experimental robotics. The readers will find a breadth of new directions of experimental robotics. The International Symposium on Experimental Robotics is a series of biannual symposia sponsored by the International Foundation of Robotics Research, whose goal is to provide a forum dedicated to experimental robotics research. Robotics has been widening its scientific scope, deepening its methodologies and expanding its applications. However, the significance of experiments remains and will remain at the center of the discipline. The ISER gatherings are a venue where scientists can gather and talk about robotics based on this central tenet. ISER2016 was sponsored by the International Foundation of Robotics Research. Financial support was also provided by Tateishi Science and Technology Foundation, the Murata Science Foundation, and Toyota Motor Corporation. We gratefully acknowledge their support. We also thank the reviewers for providing incisive and thoughtful feedback to the authors. Finally, our greatest thanks go to the authors and participants for sharing their ideas, participating in lively discussions and contributing to the symposium. Yoshihiko NakamuraUniversity of Tokyo, General Co-Chair Oussama KhatibStanford University, General Co-Chair Dana KulićUniversity of Waterloo, Program Chair Gentiane VentureTokyo University of Agriculture and Technology, Publicity Chair Contents Aerial Robots 1 Learning Transferable Policies for Monocular Reactive MAV Control 3 Shreyansh Daftry, J. Andrew Bagnell and Martial Hebert A micro-UAS to Start Prescribed Fires 12 Evan Beachly, James Higgins, Christian Laney, Sebastian Elbaum, Carrick Detweiler, Craig Allen and Dirac Twidwell Research on Hammering Test System by Unmanned Aerial Vehicles for Infrastructure Surveillance 25 Masahiko Mizui, Ikuo Yamamoto, Shunsuke Kimura and Masato Maeda Uncertainty Quantification for Small Robots Using Principal Orthogonal Decomposition 33 Konstantinos Karydis and M. Ani Hsieh Collaborative 3D Reconstruction Using Heterogeneous UAVs:​ System and Experiments 43 Timo Hinzmann, Thomas Stastny, Gianpaolo Conte, Patrick Doherty, Piotr Rudol, Marius Wzorek, Enric Galceran, Roland Siegwart and Igor Gilitschenski Actuation A Modular Folded Laminate Robot Capable of Multi Modal Locomotion 59 Je-sung Koh, Daniel M. Aukes, Brandon Araki, Sarah Pohorecky, Yash Mulgaonkar, Michael T. Tolley, Vijay Kumar, Daniela Rus and Robert J. Wood Combined Energy Harvesting and Control of Moball : A Barycentric Spherical Robot 71 Joseph Bowkett, Matt Burkhardt and Joel W. Burdick Control of Pneumatic Actuators with Long Transmission Lines for Rehabilitation in MRI 84 Melih Turkseven and Jun Ueda Terrain-Dependant Control of Hexapod Robots Using Vision 92 Timon Homberger, Marko Bjelonic, Navinda Kottege and Paulo V. K. Borges Untethered One-Legged Hopping in 3D Using Linear Elastic Actuator in Parallel (LEAP) 103 Zachary Batts, Joohyung Kim and Katsu Yamane Discrete Foot Shape Changes Improve Dynamics of a Hopping Robot 113 Fabio Giardina and Fumiya Iida Grasping 1 Learning Grasps in a Synergy-based Framework 125 Fanny Ficuciello, Damiano Zaccara and Bruno Siciliano Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction 136 Peter Kaiser, Eren E. Aksoy, Markus Grotz, Dimitrios Kanoulas, Nikos G. Tsagarakis and Tamim Asfour Core Actuation Promotes Self-manipulability on a Direct-Drive Quadrupedal Robot 147 Jeffrey Duperret, Benjamin Kramer and Daniel E. Koditschek Experiments with Hierarchical Reinforcement Learning of Multiple Grasping Policies 160 Takayuki Osa, Jan Peters and Gerhard Neumann Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection 173 Sergey Levine, Peter Pastor, Alex Krizhevsky and Deirdre Quillen Improving Grasp Performance Using In-Hand Proximity and Dynamic Tactile Sensing 185 Radhen Patel, Jorge Cañardo Alastuey and Nikolaus Correll Manipulation Learning Object Orientation Constraints and Guiding Constraints for Narrow Passages from One Demonstration 197 Changshuo Li and Dmitry Berenson Meta-level Priors for Learning Manipulation Skills with Sparse Features 211 Oliver Kroemer and Gaurav Sukhatme Automatic Object Modeling Through Integrating Perception and Robotic Manipulation 223 Zhou Teng, Huitan Mao and Jing Xiao ZMP Features for Touch Driven Robot Control via Tactile Servo 234 Zhanat Kappassov, Juan-Antonio Corrales Ramon and Véronique Perdereau Data-Driven Classification of Screwdriving Operations 244 Reuben M. Aronson, Ankit Bhatia, Zhenzhong Jia, Mathieu Guillame-Bert, David Bourne, Artur Dubrawski and Matthew T. Mason A System for Multi-step Mobile Manipulation:​ Architecture, Algorithms, and Experiments 254 Siddhartha S. Srinivasa, Aaron M. Johnson, Gilwoo Lee, Michael C. Koval, Shushman Choudhury, Jennifer E. King, Christopher M. Dellin, Matthew Harding, David T. Butterworth, Prasanna Velagapudi and Allison Thackston Application of Robot Manipulator for Cardiopulmonary Resuscitation 266 Jaesug Jung, Jeeseop Kim, Sanghyun Kim, Woon Yong Kwon, Sang Hoon Na, Kyung Su Kim, Gil Joon Suh, Byeong Wook Yoo, Jin Woo Choi, Jung Chan Lee and Jaeheung Park Experimental Analysis of Human Control Strategies in Contact Manipulation Tasks 275 Ellen Klingbeil, Samir Menon and Oussama Khatib Human-Robot Interaction 1 Hybrid Human Motion Prediction for Action Selection Within Human-Robot Collaboration 289 Ozgur S. Oguz, Volker Gabler, Gerold Huber, Zhehua Zhou and Dirk Wollherr Design and Control of Lightweight Supernumerary Robotic Limbs for Sitting/​Standing Assistance 299 Laura Treers, Roger Lo, Michael Cheung, Aymeric Guy, Jacob Guggenheim, Federico Parietti and Harry Asada Integrated Intelligence for Human-Robot Teams 309 Jean Oh, Thomas M. Howard, Matthew R. Walter, Daniel Barber, Menglong Zhu, Sangdon Park, Arne Suppe, Luis Navarro-Serment, Felix Duvallet, Abdeslam Boularias, Oscar Romero, Jerry Vinokurov, Terence Keegan, Robert Dean, Craig Lennon, Barry Bodt, Marshal Childers, Jianbo Shi, Kostas Daniilidis, Nicholas Roy, Christian Lebiere, Martial Hebert and Anthony Stentz EUROPtus:​ A Mixed-Initiative Controller for Multi-vehicle Oceanographic Field Experiments 323 Frédéric Py, José Pinto, Mónica A. Silva, Tor Arne Johansen, João Sousa and Kanna Rajan Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers 341 Aaron Bestick, Ruzena Bajcsy and Anca D. Dragan Initial Data and Theory for a High Specific-Power Ankle Exoskeleton Device 355 Sebastian Sovero, Nihar Talele, Collin Smith, Nicholas Cox, Tim Swift and Katie Byl Mobile Robots 1 High-Speed Wall-Contacting Drive for Underground Automatic Transport Vehicle 367 Hiroyuki Karasawa, Takuro Okubo, Rui Fukui, Masayuki Nakao and Yuichi Kodama Realizing Robust Control of Autonomous Vehicles 374 You Hong Eng, Hans Andersen, Scott Drew Pendleton, Marcelo H. AngJr. and Daniela Rus Learning to Plan for Visibility in Navigation of Unknown Environments 387 Charles Richter and Nicholas Roy Parallel Manipulation of Millirobot Swarms Using Projected Light Fields 399 Christopher Lawler, Ivan Penskiy, Aaron Sirken and Sarah Bergbreiter Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation 409 Lee Clement, Valentin Peretroukhin and Jonathan Kelly Experimental Validation of a Template for Navigation of Miniature Legged Robots 420 Konstantinos Karydis, Adam Stager, Herbert G. Tanner and Ioannis Poulakakis Perception Fruit Pose Estimation and Stem Touch Detection for Green Pepper Automatic Harvesting 433 Peteris Eizentals, Koichi Oka and Akinori Harada From Localized Shearing to Localized Slippage Perception 443 Van Anh Ho and Shinichi Hirai Fit for Purpose?​ Predicting Perception Performance Based on Past Experience 454 Corina Gurău, Chi Hay Tong and Ingmar Posner Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion 465 Abhinav Valada, Gabriel L. Oliveira, Thomas Brox and Wolfram Burgard Vision-Based Apple Counting and Yield Estimation 478 Pravakar Roy and Volkan Isler Towards Learning to Perceive and Reason About Liquids 488 Connor Schenck and Dieter Fox Aerial Robots 2 Vision-Based Obstacle Avoidance for Micro Air Vehicles Using an Egocylindrical Depth Map 505 Roland Brockers, Anthony Fragoso, Brandon Rothrock, Connor Lee and Larry Matthies Transformable Multirotor with Two-Dimensional Multilinks:​ Modeling, Control, and Whole-Body Aerial Manipulation 515 Moju Zhao, Koji Kawasaki, Xiangyu Chen, Yohei Kakiuchi, Kei Okada and Masayuki Inaba Localization of a Ground Robot by Aerial Robots for GPS-Deprived Control with Temporal Logic Constraints 525 Eric Cristofalo, Kevin Leahy, Cristian-Ioan Vasile, Eduardo Montijano, Mac Schwager and Calin Belta On the VINS Resource-Allocation Problem for a Dual-Camera, Small-Size Quadrotor 538 Kejian J. Wu, Tien Do, Luis C. Carrillo-Arce and Stergios I. Roumeliotis Catching a Flying Ball with a Vision-Based Quadrotor 550 Kunyue Su and Shaojie Shen Experience-Based Models of Surface Proximal Aerial Robot Flight Performance in Wind 563 John W. Yao, Vishnu R. Desaraju and Nathan Michael “On-the-Spot Training” for Terrain Classification in Autonomous Air-Ground Collaborative Teams 574 Jeffrey Delmerico, Alessandro Giusti, Elias Mueggler, Luca Maria Gambardella and Davide Scaramuzza Safe Navigation of Quadrotor Teams to Labeled Goals in Limited Workspaces 586 Sarah Tang, Justin Thomas and Vijay Kumar Grasping 2 Using Vision for Pre- and Post-grasping Object Localization for Soft Hands 601 Changhyun Choi, Joseph Del Preto and Daniela Rus Grasping and Manipulation by Underactuated Hand with Multi-Joint Fingers 613 Takumi Tamamoto, Soichiro Nomura and Koichi Koganesawa Generalizing Regrasping with Supervised Policy Learning 622 Yevgen Chebotar, Karol Hausman, Oliver Kroemer, Gaurav S. Sukhatme and Stefan Schaal Experimental Validation of Contact Dynamics for In-Hand Manipulation 633 Roman Kolbert, Nikhil Chavan-Dafle and Alberto Rodriguez Iterative Visual Recognition for Learning Based Randomized Bin-Picking 646 Kensuke Harada, Weiwei Wan, Tokuo Tsuji, Kohei Kikuchi, Kazuyuki Nagata and Hiromu Onda Mechanism and Control of Whole-Body Electro-Hydrostatic Actuator Driven Humanoid Robot Hydra 656 Hiroshi Kaminaga, Tianyi Ko, Ryo Masumura, Mitsuo Komagata, Shunsuke Sato, Satoshi Yorita and Yoshihiko Nakamura Planning and Control Gait Synthesis for Modular Soft Robots 669 Scott Hamill, Bryan Peele, Peter Ferenz, Max Westwater, Robert F. Shepherd and Hadas Kress-Gazit Discovering and Manipulating Affordances 679 R. Omar Chavez-Garcia, Mihai Andries, Pierre Luce-Vayrac and Raja Chatila Experimental Evaluation of Hybrid Conditional Planning for Service Robotics 692 Ahmed Nouman, Ibrahim Faruk Yalciner, Esra Erdem and Volkan Patoglu Improved Learning of Dynamics Models for Control 703 Arun Venkatraman, Roberto Capobianco, Lerrel Pinto, Martial Hebert, Daniele Nardi and J. Andrew Bagnell Mobile Robots 2 Data Correlation and Comparison from Multiple Sensors Over a Coral Reef with a Team of Heterogeneous Aquatic Robots 717 Alberto Quattrini Li, Ioannis Rekleitis, Sandeep Manjanna, Nikhil Kakodkar, Johanna Hansen, Gregory Dudek, Leonardo Bobadilla, Jacob Anderson and Ryan N. Smith Multi Robot Object-Based SLAM 729 Siddharth Choudhary, Luca Carlone, Carlos Nieto, John Rogers, Zhen Liu, Henrik I. Christensen and Frank Dellaert Particle Filter Localization on Continuous Occupancy Maps 742 Alberto Yukinobu Hata, Denis Fernando Wolf and Fabio Tozeto Ramos Experimental Methods for Mobility and Surface Operations of Microgravity Robots 752 Benjamin Hockman, Robert G. Reid, Issa A. D. Nesnas and Marco Pavone Multi-Sensor SLAM with Online Self-Calibration and Change Detection 764 Fernando Nobre, Christoffer R. Heckman and Gabe T. Sibley Experimental Comparison of Open Source Vision-Based State Estimation Algorithms 775 Alberto Quattrini Li, A. Coskun, S. M. Doherty, S. Ghasemlou, A. S. Jagtap, M. Modasshir, S. Rahman, A. Singh, M. Xanthidis, J. M. O’Kane and I. Rekleitis Human-Robot Interaction 2 Human Pose Estimation from Imperfect Sensor Data via the Extended Kalman Filter 789 Vlad Joukov, Rollen D’Souza and Dana Kulić Influence of Emotional Motions in Human-Robot Interactions 799 Magda Dubois, Josep-Arnau Claret, Luis Basañez and Gentiane Venture Energy Based Control for Safe Human-Robot Physical Interaction 809 Anis Meguenani, Vincent Padois, Jimmy Da Silva, Antoine Hoarau and Philippe Bidaud Psychological Evaluation on Influence of Appearance and Synchronizing Operation of Android Robot 819 Kaori Tanaka, Masahiro Yoshikawa, Yujin Wakita, Yoshio Matsumoto and Kazuhito Yokoi Collective Cognition and Sensing in Robotic Swarms via an Emergent Group-Mind 829 Michael Otte Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning 841 Wil Thomason and Ross A. Knepper Erratum to:​ Application of Robot Manipulator for Cardiopulmonary Resuscitation E1 Jaesug Jung, Jeeseop Kim, Sanghyun Kim, Woon Yong Kwon, Sang Hoon Na, Kyung Su Kim, Gil Joon Suh, Byeong Wook Yoo, Jin Woo Choi, Jung Chan Lee and Jaeheung Park Author Index853 Aerial Robots 1 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_1 Learning Transferable Policies for Monocular Reactive MAV Control Shreyansh Daftry¹  , J. Andrew Bagnell¹   and Martial Hebert¹   (1) Robotics Institute, Carnegie Mellon University, Pittsburgh, USA     Shreyansh Daftry (Corresponding author) Email: daftry@ri.cmu.edu   J. Andrew Bagnell Email: dbagnell@ri.cmu.edu   Martial Hebert Email: hebert@ri.cmu.edu Abstract The ability to transfer knowledge gained in previous tasks into new contexts is one of the most important mechanisms of human learning. Despite this, adapting autonomous behavior to be reused in partially similar settings is still an open problem in current robotics research. In this paper, we take a small step in this direction and propose a generic framework for learning transferable motion policies. Our goal is to solve a learning problem in a target domain by utilizing the training data in a different but related source domain. We present this in the context of an autonomous MAV flight using monocular reactive control, and demonstrate the efficacy of our proposed approach through extensive real-world flight experiments in outdoor cluttered environments. Keywords Transfer learningDomain adaptationReactive controlAutonomous monocular navigationMicro aerial vehicles 1 Introduction Micro Aerial Vehicles (MAVs) are becoming increasingly popular in number of important applications [3]. As these robots aspire for long-term autonomous operation in unstructured environments, designing hand-engineered perception and control software remains a tedious process, even for basic tasks like collision avoidance [2, 5]. However, in recent years learning based methods [11, 13, 15, 16] have become a very powerful alternative to designing hand-engineered perception and control software, even for basic tasks like collision avoidance. However, a major drawback with such data-driven approaches is that knowledge is usually built from scratch, and often involve complex data acquisition and training procedures. In this work, we argue that for many robot tasks it is not even possible to obtain real training data. For example, to train an expensive robotic system for collision avoidance using imitation learning we would also need to obtain examples of failure labels. This may often be catastrophic and dangerous (a crashing helicopter). Thus, it has long been a desirable goal to use alternative means such as synthetic simulations to train models that are effective in the real world. Even for tasks where training data can be obtained, the learned policies only apply to the physical system and environment the model was originally trained on, due to the limited variability of datasets. Moreover, in real-world applications we often encounter changes in dynamic conditions, such as weather and illumination, which change the characteristics of the domain. In all of the above scenarios, a good policy cannot be guaranteed if it is trained by using traditional learning techniques. Therefore, there is incentive in establishing techniques to reduce the labeling cost, typically by leveraging labeled data from relevant source domains such as off-the-shelf datasets or synthetic simulations. Domain adaptation, a method to formally reduce the domain bias, has addressed this problem [1, 7, 14]. However, to date there have been very few attempts to enhance the transferability of learned policies in the context of autonomous robotics. Even fewer have validated them experimentally through real-world experiments. In this paper, we explore these ideas in the context of vision-based autonomous MAV flight [4] in cluttered natural environments, and evaluate how a single policy learned from labeled data from source domain using domain adaptation methods could effectively enable and accelerate learning onto a new target domain. 2 Technical Approach In this section, we describe our proposed approach for learning transferable policies for autonomous MAV flight. Our work is primarily concerned with navigating MAVs that have very low payload capabilities, and operate close to the ground where they cannot avoid dense obstacle fields. We present a system that allows the MAV to autonomously fly at high speeds of up to 1.5 m/s through a cluttered forest environment, using passive monocular vision as its only exteroceptive sensor. 2.1 Learning Reactive Policy Using Imitation Learning Visual features extracted from camera input provide a rich set of information that we can use to control the MAV and avoid obstacles. In previous work, we have proposed an imitation learning based technique [16] to directly learn a linear controller of drone’s left-right velocity based on visual input. Given a set of human pilot demonstrations in cluttered forest environments and the corresponding images, [$$\mathcal {D}=\{x\_i, y\_i\}$$], we train a controller to learn a reactive policy [$$\pi $$] that can avoid trees by adapting the MAV’s heading as the drone moves forward. Ross et al. [17] showed that over several iterations, the learner is guaranteed to converge to an optimal policy [$$\pi \_n$$], based on previous experience, and mimic the behavior atleast as well as the pilot in these situations. However, a major limitation of this approach is that, it can only deal with the minor domain shift induced from sequential prediction tasks, and does not generalize to new environments seamlessly (Fig. 1). [] Fig. 1. The framework for learning transferable policies from demonstrations in simulated source domain to real target domain using Deep Adaptation Network [12]. 2.2 Policy Transfer Using Deep Domain Adaptation In this work, we extend the above approach to learn domain-adaptive policies using labeled information from source domain and unlabeled information from target domain. Let the source domain [$$\mathcal {D}\_s=\{(x\_i, y\_i)\}\_{i=1}^{n\_{s}}$$] have [$$n\_s$$] labeled examples drawn from a probability distribution p and target domain [$$\mathcal {D}\_t=\{(x\_j)\}\_{j=1}^{n\_{t}}$$] have [$$n\_t$$] unlabeled examples drawn from a probability distribution q. Then, the problem can be formulated as follows: Train a model to learn a set of features x that reduce the cross-domain discrepancy, and a policy [$$y =\pi \_{\theta }(x)$$], where y is the left-right velocity. Recently, deep convolutional neural network (CNN) based models and features [10] have been proven to be more competitive than the traditional methods on solving complex learning problems. While they have been shown to adapt to novel tasks [6], the main challenge here is that the target domain has no labeled information. Hence, directly adapting CNN to the target domain via fine-tuning is not possible. Thus, we build upon a recently proposed Deep Adaptation Network (DAN) architecture [12], which generalizes deep convolutional neural network to the domain adaptation scenario. The main idea is to enhance domain transferability in the task-specific layers of the deep neural network by explicitly minimizing the domain discrepancy. In order to achieve this, the hidden representations of all the task-specific layers are embedded to a reproducing kernel Hilbert space where the mean embedding of target domain distributions can be explicitly matched. As mean embedding matching is sensitive to the kernel choices, an optimal multi-kernel selection procedure is devised to further reduce the domain discrepancy. We use a multiple kernel variant of the maximum mean discrepancies (MK-MMD) metric [9] as the measure of domain discrepancy. It is an effective criterion that compares distributions without initially estimating their density functions. The MK-MMD of two distributions p and q is defined as the Reproducing Kernel Hilbert Space (RKHS) distance between mean embeddings of p and q: [$$\begin{aligned} d\_k^2(p,q)\triangleq \Vert \mathrm {E}\_p[\phi (x^s)]-\mathrm {E}\_q[\phi (x^t)]\Vert ^2\_{\mathcal {H}\_k} \end{aligned}$$] (1) where [$$\phi $$] is the characteristic kernel associated with the feature map. The most important property here is that [$$p=q$$] iff [$$d\_k^2(p,q)=0$$]. In order to minimize the domain discrepancy in the context of CNNs, we embed a MK-MMD based multi-layer adaptation regularizer (Eq. 1) to the CNN risk function: [$$\begin{aligned} \min \_{\varTheta } \frac{1}{n\_s} \sum \_{i=1}^{n\_s} J(\theta (x\_i^s), y\_i^s) + \lambda \sum \_{l=l\_1}^{l\_2} d\_k^2 (\mathcal {D}\_s^l, \mathcal {D}\_t^l) \end{aligned}$$] (2) where [$$\varTheta = \{W^l, b^l\}\_{l=1}^l $$] denotes the set of all CNN parameters, [$$\lambda >0$$] is the penalty parameter, J is the cross-entropy loss function and [$$\theta (x\_i^s)$$] is the conditional probability that CNN assigns [$$x\_i^s$$] to label [$$y\_i^s$$]. [$$l\_1(=6)$$] and [$$l\_2(=8)$$] are the layer indices between which the regularizer is effective. The CNN architecture is based on AlexNet [10]. As the domain discrepancy increases in the higher layers [18], we fine-tune the convolutional layers of the CNN (conv4-conv5) using source labeled examples and minimize the domain discrepamcy in the fully connected layers (fc6-fc8). The deep CNN is trained using a mini-batch supervised gradient descent (SGD) with the above optimization framework (Eq. 2). 3 Experiments and Results In this section, we present experiments to analyze the performance of our proposed method of transferring policies for monocular reactive control of MAVs. All the experiments were conducted in a densely cluttered forest area using the commercially available MAV platforms. We use a distributed processing framework, where the image stream from the front facing camera is streamed to a base station over Wi-Fi at 15 Hz. The base station processes these images, and then sends back the desired control commands to the drone. 3.1 Methodology Quantitatively, we evaluate the performance of our system by observing the average distance flown autonomously by the MAV over several runs (at 1.5 m/s), before a crash. Tests were performed in both regions of low and high tree density (approx. 1 tree per [$$6\times 6$$] [$$m^2$$] and [$$3\times 3$$] [$$m^2$$], respectively). For each scenario described below, training data with 1 km of human-piloted flight was collected in the source domain. Tests were then conducted on approximately 1 km of autonomous flights using policies learnt both with and without domain adaptation. As baseline, we compare our results to lower and upper bound: MAV flight using random policy and complete training data, respectively. 3.2 Performance Evaluation Transfer Across Systems. In this experiment we try to answer the question: Can we transfer policies over different physical systems - from one configurations of sensors and dynamics to another? We collect training data using the ARDrone as the source domain, and test on a modified 3DR ArduCopter equipped with a high-dynamic range PointGrey Chameleon camera as the target domain (See Fig. 2a). The sensor system - global shutter vs rolling shutter, image resolution and camera intrinsics are different from that of the ARDrone. Hence, a policy learnt on one system cannot be expected to trivially generalize to the other. Transfer Across Weather Conditions. In this experiment we try to answer the question: Can we transfer policies over different weather conditions - from summer to winter? We collect training data during the summer season as our source domain and test during winter season as our target domain (See Fig. 2b). In this case the domain shift is induced by the difference in visual appearance. While the summer environment is cluttered with dense foliage, the winter conditions are often characterized with the presence of snow and absence of foliage. [] Fig. 2. Experiments and Results for (Row-1) Transfer across physical systems from ARDrone to ArduCopter, (Row-2) Transfer across weather conditions from summer to winter and (Row-3) Transfer across environments from Univ. of Zurich to CMU. Transfer Across Environments. In this experiment we try to answer the question: Can we transfer policies over different environments - from one physical location to another? This is equivalent to using an off-the-shelf dataset as the source domain and testing in a seperate target domain. In particular, we use the Univ. of Zurich forest-trail dataset [8] as the source. The dataset provides a large-scale collection of images from a forest environment along with annotations for trail heading (left, center or right). Using these source labels, we train a policy for MAV reactive control and test at the forest environment near CMU as the target domain (See Fig. 2c). Here, the domain shift is induced by the implicit difference in physical location and nature of the task. Note: It is not possible to compare results to the source domain. 3.3 Experimental Insights The main results obtained in this paper is that learning a transferable policy using the proposed approach can boost performance significantly in the target domain, as compared to simply re-using the learnt policy in new domains. Quantitatively, we show this through extensive outdoor flight experiments over a total distance of 6 km in environments of varying clutter density. Even without any training data, in the target domain, the MAV was able to successfully avoid more than 1900 trees, with an accuracy greater than [$$90\,\%$$]. [] Fig. 3. Qualitative visualization of an example flight in dense forest. The training data was collected from the same environment during summer season (Col-1) and tested during the winter season (Col-2). The image sequence of MAVs on-board view is chronologically ordered from top to bottom and overlaid with color-coded commands issued by the policy learned using our proposed approach. Additionally, we also compute the commands that would have been generated by the policy without domain adaptation (Col-3), for qualitative comparison. We extend our evaluations to qualitatively assess the learned policies during one of the runs from our flight test, as shown in Fig. 3. We show the nature of training data from summer, snapshots of predicted left-right velocity commands in the chronological order of the flight path taken by the MAV. Moreover, we also analyze the the policy learnt without domain adaptation by predicting control commands (offline) using the snapshot images as input. It can be observed that the domain adaptive policy performs better and is able to generalize to the new domain. Furthermore, we observe that for the first two experiments the performance in the target domain is better than that in the source domain. For transfer across physical systems, this can be attributed to the underlying dynamics of the MAV. The ArduCopter has accurate and stable positioning system that allows it to be more resistive to strong winds, which is a major cause of crash for the less stable ARDrones. Moreover, the target domain has a better sensor suite. The increased resolution probably helped in detecting the thinner trees. For transfers across weather condition, we again observe a boost in performance in the target domain. Empirical analysis of the failure cases reveal that percentage of failures due to branches and leaves diminishes greatly in winter conditions resulting in better overall performance. In comparison to the above two experiments, the performance improves only slightly for transfer across environments. The reason for this is that the source labels are very coarse and were collected for a classification task (left, right or center). Hence, we learn that to improve the transferability of motion policies over physical systems, it is also important to explicitly address (1) The domain shift induced by discrepancy in dynamics, (2) The expected failure cases in the target domain, and (3) The discrepancy induced by not only the domain, but also the task. 4 Conclusion In this paper, we have presented a generic framework for learning transferable motion policies. Our system learns to predict how a human pilot would control a MAV in a source domain, and is able to successfully transfer that behaviour to a new target domain. Quantitatively, we show significant boost in performance over simply re-using policies without any explicit transfer, through extensive real-world experiments. We have demonstrated our approach on an autonomous MAV navigation task using monocular reactive control. However, our treatment and findings apply to any aspect of experimental robotics where a system needs to be trained for end-to-end autonomous behaviour based on sensor data. References 1. Baktashmotlagh, M., Harandi, M., Lovell, B., Salzmann, M.: Unsupervised domain adaptation by domain invariant projection. In: Proceedings of the IEEE International Conference on Computer Vision (2013) 2. Daftry, S., Dey, D., Sandhawalia, H., Zeng, S., Bagnell, J.A., Hebert, M.: Semi-dense visual odometry for monocular navigation in cluttered environment. In: IEEE International Conference on Robotics and Automation (ICRA) Workshop (2015) 3. Daftry, S., Hoppe, C., Bischof, H.: Building with drones: accurate 3d facade reconstruction using mavs. In: IEEE International Conference on Robotics and Automation (ICRA) Workshop (2015) 4. Daftry, S., Zeng, S., Khan, A., Dey, D., Melik-Barkhudarov, N., Bagnell, J.A., Hebert, M.: Robust monocular flight in cluttered outdoor environments. arXiv preprint arXiv:​1604.​04779 (2016) 5. Dey, D., Shankar, K.S., Zeng, S., Mehta, R., Agcayazi, M.T., Eriksen, C., Daftry, S., Hebert, M., Bagnell, J.A.: Vision and learning for deliberative monocular cluttered flight. In: Wettergreen, D.S., Barfoot, T.D. (eds.) Field and Service Robotics. STAR, vol. 113, pp. 391–409. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-27702-8\_​26 CrossRef 6. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: Decaf: a deep convolutional activation feature for generic visual recognition. In: Proceedings of the 31st International Conference on Machine Learning (2014) 7. Ghifary, M., Kleijn, W.B., Zhang, M.: Domain adaptive neural networks for object recognition. In: Pham, D.-N., Park, S.-B. (eds.) PRICAI 2014. LNCS (LNAI), vol. 8862, pp. 898–904. Springer, Heidelberg (2014). doi:10.​1007/​978-3-319-13560-1\_​76 8. Giusti, A., Guzzi, J., Ciresan, D., He, F.L., Rodriguez, J.P., Fontana, F., Faessler, M., Forster, C., Schmidhuber, J., Di Caro, G., et al.: A machine learning approach to visual perception of forest trails for mobile robots. IEEE Rob. Autom. Lett. (2016) 9. Gretton, A., Sejdinovic, D., Strathmann, H., Balakrishnan, S., Pontil, M., Fukumizu, K., Sriperumbudur, B.K.: Optimal kernel choice for large-scale two-sample tests. In: Advances in Neural Information Processing Systems (2012) 10. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012) 11. Levine, S., Finn, C., Darrell, T., Abbeel, P.: End-to-end training of deep visuomotor policies. arXiv preprint arxiv:​1504.​00702 (2015) 12. Long, M., Cao, Y., Wang, J., Jordan, M.: Learning transferable features with deep adaptation networks. In: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015) (2015) 13. Michels, J., Saxena, A., Ng, A.Y.: High speed obstacle avoidance using monocular vision and reinforcement learning. In: Proceedings of the 22nd International Conference on Machine Learning (2005) 14. Pan, S.J., Tsang, I.W., Kwok, J.T., Yang, Q.: Domain adaptation via transfer component analysis. IEEE Trans. Neural Netw. 22(2), 199–210 (2011)CrossRef 15. Pomerleau, D.A.: Alvinn: an autonomous land vehicle in a neural network. Technical report, DTIC Document (1989) 16. Ross, S., Barkhudarov, N., Shankar, K.S., Wendel, A., Dey, D., Bagnell, J.A., Hebert, M.: Learning monocular reactive UAV control in cluttered natural environments. In: International Conference on Robotics and Automation (ICRA) (2013) 17. Ross, S., Gordon, G.J., Bagnell, D.: A reduction of imitation learning and structured prediction to no-regret online learning. In: International Conference on Artificial Intelligence and Statistics, pp. 627–635 (2011) 18. Yosinski, J., Clune, J., Bengio, Y., Lipson, H.: How transferable are features in deep neural networks? In: Advances in Neural Information Processing Systems (2014) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_2 A micro-UAS to Start Prescribed Fires Evan Beachly¹, James Higgins¹, Christian Laney¹, Sebastian Elbaum¹, Carrick Detweiler¹  , Craig Allen² and Dirac Twidwell³ (1) Computer Science and Engineering Department, University of Nebraska, Lincoln, USA (2) U.S. Geological Survey - Nebraska Cooperative Fish and Wildlife Unit, Lincoln, USA (3) Department of Agronomy and Horticulture, University of Nebraska, Lincoln, USA     Carrick Detweiler Email: carrick@cse.unl.edu Abstract Prescribed fires have many benefits, but existing ignition methods are dangerous, costly, or inefficient. This paper presents the design and evaluation of a micro-UAS that can start a prescribed fire from the air, while being operated from a safe distance and without the costs associated with aerial ignition from a manned aircraft. We evaluate the performance of the system in extensive controlled tests indoors. We verify the capabilities of the system to perform interior ignitions, a normally dangerous task, through the ignition of two prescribed fires alongside wildland firefighters. Keywords Micro unmanned aerial systemUASUAVDronePrescribed burnInterior fire ignition 1 Introduction Prescribed fires can reduce wildfire severity [6, 7, 11], control invasive species [2, 10, 20], and improve rangelands for livestock and grazing [15]. However, conducting prescribed fires also puts ground crews at risk of injury or death. Firefighters igniting the interior of an area are surrounded by unburned fuel, and the tool of choice for interior ignition, the drip torch, puts the fire dangerously close to the crew. Changes in wind can smother the personnel in smoke and transform a slow backburn into a fast-moving blaze, leaving firefighters little time to escape or deploy a fire shelter [1]. Burning large acreages introduces additional difficulties, as the fire line may be kilometers long with ravines, dense vegetation, or other difficult-to-escape terrain. Aerial ignition removes the need to have personnel inside the burn area, but existing helicopter-mounted ignition systems [13] are too expensive for most private landowners [21] and introduces the risk of crashing [14]. Firefighters need new tools for interior ignition that reduce risk, yet are low cost and easy to operate, to make them available to the majority of prescribed fire users. In this paper we present an Unmanned Aerial System (UAS) for fire prescription, called the UAS-Rx. The UAS-Rx transforms UASs from those that only remotely measure and monitor fires to a system that can actively manipulate the shape and trajectory of the fire to achieve the desired environmental management goals. Figure 1 shows the results of the UAS-Rx igniting a prescription by dropping delayed aerial ignition spheres onto the invasive Cedar trees in the targeted area. [] Fig. 1. The prototype UAS-Rx returning after starting a prescribed fire with the Loess Canyon Rangeland Alliance [3]. Our vision is that the UAS-Rx would be used at prescribed burns that cannot afford aerial ignition from a manned aircraft. The lightweight UAS could be carried on the back of a firefighter to the burn site, and then be deployed to ignite terrain that is unsafe to enter and ignite by hand. Another advantage of using a UAS for prescribed fires is that it offers an aerial platform for cameras and sensors, allowing the firefighters to maintain situation awareness. Indeed, UASs are increasingly used for remote fire measurement and monitoring [4, 16, 17], including simulations on how to track fire and optimize flight paths in these conditions [8, 19]. Dropping the ignition spheres is similar to dropping wireless sensor nodes, which has been performed using autonomous helicopters [5, 9], and fixed-wing UAS’s [18], but we must also deal with the harsh fire environment. To the best of our knowledge, this is the first autonomous robotic system that has been designed for and used to start prescribed fires. 2 Requirements For the UAS-Rx to be successful, the technical capabilities need to be contextualized in the fire-ignition domain. This context is defined by target areas covering hundreds to thousands of acres, teams of firefighters performing different roles and operating a variety of vehicles, all working under a burn plan and a set of regulations and common practices, and operating in specific ignition situations that make firefighters especially vulnerable. This context and our early studies with fire ecologists, land managers, and firefighters defined an initial set of parameters that have influenced the design of the UAS-Rx: - Must be small and light enough to be carried by a single firefighter. - Must be easily deployable and operable in a hostile environment (e.g. wind gusts, smoke, hot temperatures) and terrain (e.g. canyons, trees, gullies). - Must not increase the potential for uncontrolled fires. - Must align with large body of practices and regulations on how such fires must be conducted. These requirements lead to the design of a UAS-Rx prototype built on a micro-UAS platform, that can be operated from a small laptop (in its current form), that can navigate and drop a fire payload with enough precision to remain within specified regions, and that replicates an accepted form of fire-ignition delivery in a miniaturized and automated fashion. The next section covers key technical elements underlying these themes. 3 Technical Approach 3.1 Design Overview We have developed a prototype UAS-Rx, shown in Fig. 2. It consists of three main parts: a hexacopter (commercially available, Ascending Technologies Firefly UAS), a chute that contains ignition spheres, and a “Dropper” attached underneath the hexacopter. Our design of the UAS-Rx has gone through several revisions that explored different sensing and payload tradeoffs. We present the latest here, for details on prior revisions see [12]. The UAS-Rx is 39 cm tall, 65 cm wide, and has a mass of 1.9 kg at takeoff. [] Fig. 2. Unmanned aerial system for fire prescription (UAS-Rx). [] Fig. 3. Dropper top view. The chute on the UAS-Rx carries 12 delayed aerial ignition spheres, which are used to start the fire. Ignition spheres are a commercially available product designed to be used for aerial ignition from helicopters. The brand used in this work is the Premo Fireball. Each ignition sphere is a 32 mm diameter hollow plastic sphere containing 3 g of Potassium Permanganate. When an ignition sphere is injected with 1 ml of common automotive antifreeze, the Ethylene Glycol in the antifreeze will start an exothermic chemical reaction with the Potassium Permanganate. The ignition sphere will burst into flame 20 to 60 s after injection, depending on the ambient temperature. The use of ignition spheres that were already widely used by the fire community has significantly aided the acceptance of the UAS-Rx. The device for injecting and dropping the ignition spheres, the Dropper, (shown in Fig. 3) is attached underneath the hexacopter by a manual quick-release mechanism. The ignition spheres are gravity-fed to the Dropper by a chute that wraps around the front. The total mass of the ignition spheres and dropper is 782 g. On the Firefly, this payload constrains the maximum flight time to 10–12 min. The system, however, is designed to be self-contained with its own battery, processing, and communication, so that it can be carried by larger multi-rotor or fixed wing UASs with correspondingly longer flight times. 3.2 Dropper Mechanical Design The dropper is responsible for loading, piercing, injecting, and releasing the ignition spheres, and accomplishes this using three motors. The structural components of the Dropper were rapidly prototyped from 3-D printed thermoplastics and laser cut acrylic. Figure 4 shows the loading and release system, a pair of sliding hatches controlled by a single motor. [] Fig. 4. Loading and releasing system. [] Fig. 5. Piercing system. Once an ignition sphere has fallen into the chamber, the pierce motor (see Fig. 5) pulls on the lever arm and drives the ignition sphere onto a 16 Gauge stainless-steel needle. Puncturing the ignition sphere with the needle normally requires approximately 50 N of force. However, the shell of the ignition sphere has ribs and a seam of thicker plastic that can require up to 100 N of force to pierce. The combination of the piercing motor, lead screw, and lever arm can produce an estimated piercing force of 130 N, assuming 80 % loss caused by the lead screw and friction between moving components. As the pierce ram pushes on the ignition sphere, the curved surface on the interior of the chamber centers the ignition sphere onto the needle. This ensures that the needle does not get deflected and bent by an oblique strike on the curvature of the ignition sphere. [] Fig. 6. Injection system Figure 6 shows the system that injects the ignition sphere with antifreeze after it has been pierced. Antifreeze is carried in the syringe, which gets compressed by the injection motor. After compression, the antifreeze travels through the antifreeze transfer tube and out the needle. When the ignition sphere is pulled off the needle, there is 2 mm of clearance between the needle tip and the sphere. This is more than enough to ensure that it will not remain stuck on the needle tip when it needs to be dropped, and account for any variability in the shape of the ignition sphere. 3.3 Dropper Embedded System Design The embedded system was designed to reduce the risk of an ignition within the dropper. This is accomplished by closely monitoring the motors to detect any failures, taking precautions before injecting the ignition sphere, and making the sequence of operations required to inject and drop an ignition sphere an atomic operation from the user’s perspective. The Dropper is controlled by an ATMega2560 microcontroller on a custom-designed printed circuit board. Each motor is controlled by a motor driver with built-in current sensing and over-current protection. A quadrature counter chips track the position of magnetic encoders on each motor. We placed pushbutton switches at the limit of each actuator’s range of motion to calibrate the positions on startup. The processor communicates to the ground station using a 2.4 GHz XBee radio module that has a range of 1 km. While operating a motor, the processor monitors the current draw and position in a 500 Hz control loop. The processor uses the counter to track the actuator’s position, and stop it at the correct place. If the counter stops incrementing or decrementing while the motor is being powered, or if the motor is drawing a large amount of current, the motor is assumed to have stalled, and is stopped to prevent damage. As a failsafe, each operation has a configurable timeout that limits how long the motor will run before the processor considers its next action. Status messages are transmitted from the dropper automatically at a rate of 5 Hz, and inform the operator about what the dropper is trying to do, its state, and any failures that have occurred. Figure 7 shows the details of the procedure that the processor follows to inject and drop an ignition sphere. [] Fig. 7. Procedure to inject and drop an ignition sphere. Wait times and the injection amount can be customized over the radio link, but default to 1 s and 1 ml. The worst case scenario is for an ignition sphere to be injected, but unable to be released. The procedure in Fig. 7 helps reduce the probability that a mechanical failure will lead to this situation by only injecting if the bottom hatch was successfully opened, and if the piercing ram is functional. In the event that the piercing ram is unable to drive back after injection and drop the ignition sphere, the operator is alerted by the critical fire danger flag in the periodic status messages transmitted by the Dropper’s processor. The operator has limited control over the actuators in the dropper. This is to prevent unintentionally injecting an ignition sphere without dropping it. A single command starts the entire inject and drop process. 3.4 User Interface Prescribed burns are highly dynamic, and changes in wind or the progress of the fire may require adjusting the burn plan. The operator needs a clear understanding of the UAS-Rx’s situation in order to react to these changes. To facilitate this, we render a top-down view of the area centered on the UAS-Rx’s takeoff point (which is presumed to be near the operator). The rendered view has icons for the UAS-Rx, the path it has recently traveled, and the current waypoint. This interface could be extended to overlay this information onto pre-downloaded satellite imagery of the area. In addition to this rendered view, the UAS-RX has a downward-facing video camera and analog video transmitter to allow the operator to see where the ignition spheres are landing. The operator is able to place GPS waypoints that the UAS-Rx autonomously flies to using a PID controller, and can also customize the travel speed. To drop ignition spheres, the operator can either press a button to drop a single ignition sphere at the UAS-Rx’s current location, or can specify a customized sequence of periodic drops. 4 Experiments and Results We tested the UAS-Rx both in-lab and at two actual prescribed burns. In-lab tests were conducted mainly to quantify the reliability of the dropper in a controlled setting. The purpose of the prescribed fire tests was to gain information about the kind of missions the UAS-Rx is expected to be able to complete, the fire environment, and to identify ways to further improve it for use at prescribed burns. 4.1 In-Lab Tests The UAS-Rx was extensively tested in our lab and also in an indoor arena where we could test ignitions in a controlled environment. Encoder and motor failures were simulated in order to validate that the software can detect the failures and respond correctly. Communication tests showed that 96 % of status messages are received when the UAS-Rx was 200 m away. Over 120 water injection tests indicated that approximately 90 % of the ignition spheres will be injected with enough antifreeze to ignite. The other 10 % were punctured at a thick part of the shell, and the plastic partially obstructed the needle during injection. During these tests, the needle never became dull, bent, or plugged with plastic, and no sphere became jammed in the system or had difficulty leaving the dropper after injection. 4.2 Loess Canyon Rangeland Alliance Prescribed Burn The first UAS-Rx prescribed burn was conducted with the Loess Canyon Rangeland Alliance [3] in south-western Nebraska. It required coordination with the fire council of the area (which includes the land owners) and the Federal Aviation Administration. Under the guidance of the burn boss, we targeted an area of approximately 40 acres (0.16 km²), within a larger effort to ignite over 2000 acres (8 km²), and involved about 60 fire-fighters for a full day. We performed 5 flights over 3 gullies that were overgrown with Eastern Red-Cedar (an invasive evergreen tree species). Our ignition plan was to hover about 10 m over the cedar trees and drop multiple ignition spheres in each spot to ensure ignition. However, we learned that due to the flammability of the cedar trees, a single ignition sphere was sufficient to ignite a large portion of the gully. The left side of Fig. 8 shows the paths of the five flights we performed and the spots where the UAS-Rx dropped ignition spheres. Note that the UAS-Rx is able to ignite locations within or behind thickly vegetated terrain that a human would have a difficult time accessing (see flight paths 1 and 2, at the top). All five flights successfully ignited their targets. The delay on the ignition spheres ensured that the fire started after the UAS-Rx had left the area. [] Fig. 8. Flight paths and ignition sphere drop locations (white markers) at prescribed burn tests. Left: Loess Canyon Rangeland Alliance (LCRA), Right: Homestead National Monument (HNM). Both images are at the same scale. Map Data [$$\copyright $$]Google, Imagery [$$\copyright $$]DigitalGlobe, Map created at GPSVisualizer.​com This exploratory test was conducted with an earlier version of the UAS-Rx that could hold 30 ignition spheres in an agitated hopper (see the cylindrical container on the UAS-Rx in Fig. 1). Since a single ignition sphere can ignite a large area, we redesigned the UAS-Rx to use a gravity-fed chute, which holds fewer ignition spheres, but is lighter and provides a smoother ball flow. The dropper was redesigned to be able to apply more force, making it more reliable. In regards to the interface, we attached a downward-facing camera to the UAS-Rx so the operator can see if the UAS-Rx is above the target, and also see where the ignition spheres land. 4.3 Homestead National Monument Prescribed Burn The prescribed burn at Homestead National Monument of America tested the latest design of the dropper. It required cooperation with professional fire-fighters and numerous government organizations (FAA, National Parks, Department of the Interior, and others), including needing special permission to fly a UAS at a national monument. This prescribed burn involved 22 firefighters, and burned 23 acres (0.09 km²) in 2 h. During this prescribed burn, firefighters with drip torches ignited the perimeter, while the UAS-Rx ignited the interior. Interior ignition is typically conducted by igniting a line of ground perpendicular to the wind. The downwind side of the line is quickly burned, and the fire runs out of fuel when it reaches the previously burned area. When that happens, another line is ignited. The UAS-Rx flights at this test sought to replicate this strategy. The right side of Fig. 8 depicts the Homestead National Monument burn area. The wind is blowing towards the South. Firefighters ignited a perimeter along the East, South, and West sides of the image. A typical flight proceeded as follows: we set up behind the East perimeter, launched the UAS-Rx to a height of about 15 m, and flew over the perimeter and 200 m into the interior. We then directed the UAS-Rx to fly back to us at a speed of 0.5 m per second while dropping one ignition sphere every 8 seconds (one every 4 m). After it had dropped all 12 ignition spheres, we directed it to return to us and land. The total flight lasts approximately 5 min, giving us over 5 min of reserve flight time. The right side of Fig. 8 shows the flight paths of the 5 tests conducted at Homestead National Monument. Table 1 lists information about each of the 10 prescribed burn test flights. Table 1. Prescribed burn flight data +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | Flight | Flight time | Round trip distance | Max range | Battery voltage before landing | # of Drops | Avg. dropping altitude AGL | +:=======+:============+:====================+:==========+:===============================+:===========+:===========================+ | LCRA 1 | 4.62 min | 270.79 m | 122.82 m | 10.784 V | 4 | 16.38 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | LCRA 2 | 6.02 min | 169.24 m | 73.53 m | 10.673 V | 5 | 12.17 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | LCRA 3 | 4.52 min | 257.31 m | 100.76 m | 10.821 V | 2 | 14.66 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | LCRA 4 | 5.67 min | 310.97 m | 99.49 m | 10.777 V | 14 | 13.19 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | LCRA 5 | 4.47 min | 346.90 m | 151.46 m | 10.764 V | 2 | 20.42 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | HNM 1 | 5.67 min | 373.73 m | 96.06 m | 10.830 V | 12 | 11.05 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | HNM 2 | 5.53 min | 429.34 m | 195.56 m | 10.535 V | 12 | 17.49 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | HNM 3 | 4.73 min | 420.40 m | 200.86 m | 10.946 V | 12 | 20.39 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | HNM 4 | 4.88 min | 466.42 m | 157.37 m | 10.988 V | 12 | 17.23 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ | HNM 5 | 6.32 min | 456.07 m | 116.60 m | 10.691 V | 12 | 16.11 m | +--------+-------------+---------------------+-----------+--------------------------------+------------+----------------------------+ The average dropping altitude was between 11 and 21 m above the ground. This height was high enough prevent the line of sight from being blocked by terrain or vegetation, and provided at least 7 m of clearance over trees, bushes, and fire. Flying any higher would only increase the distance the ignition spheres could be carried by the wind as they fall. The longest flight was HNM 5, which lasted 6.32 min. For this flight, we were sufficiently far enough ahead of the fire line that we had time to fly back over the locations we dropped ignition spheres and collect footage with the downward-facing camera mounted on the UAS-Rx. Figure 9 shows several frames of this footage. [] Fig. 9. Video frames from a flyover of the ignition spheres dropped during the fifth flight at Homestead National Monument. Arrows point to locations ignition spheres were dropped. Of the 12 ignition spheres that were dropped as part of flight HNM 5, only the tenth did not ignite. This ignition sphere took 15 % more time to puncture than normal, indicating that the needle struck a thick spot on the shell of the ignition sphere, such as the seam or a rib, which may have obstructed flow of antifreeze into the ignition sphere. This ignition success rate closely corresponds to the 90 % ignition rate found by the in-lab tests. After examining the logs during the other 4 Homestead flights, we inferred that 6 of the 48 ignition spheres were unlikely to ignite, based on the time it took to puncture and inject each sphere. Despite the fact that some ignitions spheres failed to ignite, we did not discover any unburned patches of land after the fire, as the fire from each ignition sphere was able to spread to cover the gap. Notice in Fig. 9 that the fire from ignition spheres 1 and 2 have joined together. It is probable that the ignition spheres could be spaced further apart than the 4 m we programmed and still yield a connected line of fire. This would allow the current prototype of the UAS-Rx to ignite longer fire lines. In addition to the downward-facing camera, we also attached a temperature sensor to the UAS-Rx. However, it didn’t measure any abnormally high temperatures. It measured an average temperature of 24 C while the UAS-Rx was on the ground, and 17 C while the UAS-Rx was flying 15 m in the air. The average preparation time between flights at the Homestead National Monument was 5 min, which we would like to reduce further. We have some ideas on how to reduce the time needed to reload the UAS-Rx, such as adding a tube so that antifreeze can be refilled without removing the dropper or lifting the UAS-Rx. During these tests, we observed that the fire fighters’ attention is heavily demanded by observing how the fire is progressing, and communicating over their hand-held radios. Manually directing the UAS-Rx requires the operator’s continual focus, therefore more extensive autonomous flight planning would be beneficial. For example, the fire-fighter could draw the perimeter of the area that needs to be burned, and the UAS-Rx could autonomously plan the ignition lines and drop locations, take off, and complete the mission. 5 Conclusion and Future Work Fire-fighters need new tools for interior ignition that are safe and cost-effective. This paper described the design and evaluation of an unmanned aerial system to start prescribed fires from a distance. This unmanned aerial igniter (UAS-Rx), was designed to safely and reliably puncture, inject, and drop ignition spheres, a commercial product designed for aerial ignition from manned aircraft. The UAS-Rx’s mechanical and system design detect and help prevent failures, and reduce the severity of their consequences. The UAS-Rx has demonstrated reliability, with a 90 % ignition rate and no mechanical or system failures occurring in hundreds of test injections, and it has demonstrated effectiveness, by successfully igniting the interior areas at two prescribed fires. The prescribed burn tests gave valuable insight into ways to improve the usability of the UAS-Rx, such as adding a downward-facing camera, reducing preparation time, and increasing autonomy. Our work demonstrates a great potential of unmanned aerial systems as an ignition tool. Although the UAS-Rx prototype presented in this paper has a limited flight time and ignition sphere capacity, the modularity of our Dropper allows us to easily continue our work on a larger UAS in the future. The mechanical design of the dropper can be further refined to be stronger, more light-weight, and easier to resupply. Furthermore, we would like to make the UAS-Rx capable of autonomously planning and flying missions. These improvements should make the UAS-Rx a valuable tool for conducting prescribed burns safely and easily in the future. Acknowledgments This work was supported in part by NSF 1638099, NSF 1539070, and NRI-USDA-2013-67021-20947. We are thankful to the NPS, DOI, FAA, and especially appreciate the efforts of Loess Canyons Rangeland Alliance, Homestead National Monument, and all of the firefighters for letting us learn and participate in their efforts. We would also like to thank Becca Horzewski and Dr. Brittany Duncan of the Nimbus Lab. The NE Cooperative Fish and Wildlife Research Unit is jointly supported by a cooperative agreement between the U.S. Geological Survey, the NE Game and Parks Commission, UNL, the U.S. Fish and Wildlife Service and the Wildlife Management Institute. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government. References 1. Rock Creek Rx Entrapment Facilitated Learning Analysis, November 2011. http://​www.​wildfirelessons.​net/​HigherLogic/​System/​DownloadDocument​File.​ashx?​DocumentFileKey=​​9c200d99-b54e-46ea-84ab-370ff5444176 2. Allen, E.A., Chambers, J.C., Nowak, R.S.: Effects of a spring prescribed burn on the soil seed bank in sagebrush steppe exhibiting pinyon-juniper expansion. Western North Am. Nat. 68(3), 265–277 (2008)CrossRef 3. Alliance, L.C.R.: Loess canyons rangeland alliance (2016). http://​www.​loesscanyonsburn​group.​com/​ 4. Ambrosia, V., Wegener, S., Zajkowski, T., Sullivan, D., Buechel, S., Enomoto, F., Lobitz, B., Johan, S., Brass, J., Hinkley, E.: The ikhana unmanned airborne system (UAS) western states fire imaging missions: from concept to reality (2006–2010). Geocarto Int. 26(2), 85–101 (2011)CrossRef 5. Anthony, D., Basha, E., Ostdiek, J., Ore, J.P., Detweiler, C.: Surface classification for sensor deployment from UAV landings. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA) (2015) 6. Baeza, M., De Luıs, M., Raventós, J., Escarré, A.: Factors influencing fire behaviour in shrublands of different stand ages and the implications for using prescribed burning to reduce wildfire risk. J. Environ. Manag. 65(2), 199–208 (2002)CrossRef 7. Boer, M.M., Sadler, R.J., Wittkuhn, R.S., McCaw, L., Grierson, P.F.: Long-term impacts of prescribed burning on regional extent and incidence of wildfires—evidence from 50 years of active fire management in SW Australian forests. Forest Ecol. Manag. 259(1), 132–142 (2009)CrossRef 8. Casbeer, D.W., Beard, R.W., McLain, T.W., Li, S.M., Mehra, R.K.: Forest fire monitoring with multiple small UAVs. In: Proceedings of the 2005 American Control Conference, vol. 5, pp. 3530–3535, June 2005 9. Corke, P., Hrabar, S., Peterson, R., Rus, D., Saripalli, S., Sukhatme, G.: Autonomous deployment and repair of a sensor network using an unmanned aerial vehicle. In: Proceedings of the 2004 IEEE International Conference on Robotics and Automation, ICRA 2004, vol. 4, pp. 3602–3608, April 2004 10. DiTomaso, J.M., Brooks, M.L., Allen, E.B., Minnich, R., Rice, P.M., Kyser, G.B.: Control of invasive weeds with prescribed burning 1. Weed Technol. 20(2), 535–548 (2006)CrossRef 11. Finney, M.A., McHugh, C.W., Grenfell, I.C.: Stand-and landscape-level effects of prescribed burning on two arizona wildfires. Canadian J. Forest Res. 35(7), 1714–1722 (2005)CrossRef 12. Higgins, J.: Design, Testing, and Evaluation of Robotic Mechanisms and Systems for Environmental Monitoring and Interaction. Master’s thesis, Department of Materials and Mechanical Engineering, University of Nebraska-Lincoln (2016) 13. Hodgson, A., Cheney, N.P.: Aerial ignition for backburning. Aust. Forestry 33(4), 268–274 (1969)CrossRef 14. Ippolito, G., Murray, E.: Two U.S Forest Service Employees & Pilot Die in Helicopter Crash, March 2005 15. Keeley, J.E.: Fire management impacts on invasive plants in the western united states. Conserv. Biol. 20(2), 375–384 (2006). http://​dx.​doi.​org/​10.​1111/​j.​1523-1739.​2006.​00339.​x CrossRef 16. Merino, L., Caballero, F., Martínez-de Dios, J.R., Maza, I., Ollero, A.: An unmanned aircraft system for automatic forest fire monitoring and measurement. J. Intell. Rob. Syst. 65(1–4), 533–548 (2012)CrossRef 17. Merino, L., Martínez-de Dios, J.R., Ollero, A.: Cooperative unmanned aerial systems for fire detection, monitoring, and extinguishing. In: Handbook of Unmanned Aerial Vehicles, pp. 2693–2722. Springer, Netherlands (2015) 18. Pister, K.S.: Tracking vehicles with a uav-delivered sensor network (2001). http://​robotics.​eecs.​berkeley.​edu/​~pister/​29Palms0103/​ 19. Skeele, R.C., Hollinger, G.A.: Aerial vehicle path planning for monitoring wildfire frontiers. In: Wettergreen, D.S., Barfoot, T.D. (eds.) Field and Service Robotics. STAR, vol. 113, pp. 455–467. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-27702-8\_​30 CrossRef 20. Stritzke, J.F., Bidwell, T.G.: Eastern redcedar and its control. Weeds Today 15(3), 7–8 (1984) 21. Wade, D.: Ignition devices for prescribed burning, March 2013. http://​southernfireexch​ange.​org/​SFE\_​Publications/​factsheets/​2013\_​3.​pdf © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_3 Research on Hammering Test System by Unmanned Aerial Vehicles for Infrastructure Surveillance Masahiko Mizui¹  , Ikuo Yamamoto²  , Shunsuke Kimura²   and Masato Maeda²   (1) Common Education Center, Kyushu Kyoritu University, 1-8 Jiyugaoka, Yahatanisi-ku, Kitakyushu 807-8585, Japan (2) Graduate School, Nagasaki University, 1-14 Bunkyo, Nagasaki 852-8521, Japan     Masahiko Mizui (Corresponding author) Email: mizui@kyukyo-u.ac.jp   Ikuo Yamamoto (Corresponding author) Email: iyamamoto@nagasaki-u.ac.jp   Shunsuke Kimura (Corresponding author) Email: bb52115118@ms.nagasaki-u.ac.jp   Masato Maeda (Corresponding author) Email: bb52116134@ms.nagasaki-u.ac.jp Abstract Infrastructure as bridges and dams requires periodic inspection. And it is necessary to record a change over the years. It is required to establish a maintenance cycle. Inspection there is a need to be observed by eyes from a short distance. But, it would need to cost and effort. In fact often cases of a telescope. Difference of the inspection method, there also are differences in the test results. In this study, to develop a testing apparatus of Unmanned Aerial Vehicle for the purpose of labor saving of inspection. It’s a hammering test equipment to be mounted to medium-size UAV. Safely to realize the hammering test in UAV hovering state. 1 Introduction Target of research is the development of the hammering test equipment to loaded on multi-rotor UAV. The purpose is reduced to cost and labor for the maintenance of social infrastructures. In Japan, Ministry of Land, Infrastructure and Transport (MLIT) is advocated the “i-construction”. It is recommended the efficiency and added value of the work of the construction and civil engineering by the Information and Communication Technology (ICT). The purpose to improvement of wages and ensure laborer are applying ICT and robot technology. Infrastructure as bridges and dams requires periodic survey. And it is necessary to record a change over the years. It is required to establish a maintenance planning cycle. Inspection there is a need to survey by eyes from to approach short-range. In fact, often survey cases to use a telescope. But, there are also differences in the results of surveyors. Cause, it is the judgment of only visual at the surveyors. In addition to the short-range survey, combination of hammering test is recommended. Hammering test is the inspection by the surface impact, which is a non-destructive method for the internal by the reflect sound. Hammering test has been commonly used. In order to approach the bridges and dams, it is necessary to construction of scaffolding. The other to the approach by the rope (rope access) and bucket of vehicle for high lift was chosen. UAV is able to vertical takeoff and hovering near the inspection surface. Aerial photography by UAV is popular for their advantages. Therefore, we are development to small and lightweight hammering test equipment to loaded on multi-rotor UAV. The rotary hammering test [1, 2] is shown that non-destructive inspection by a small slapping sound. In order to realize the hammering test by the UAV, it is necessary to launch a hammer to the horizontal and vertical direction from the aircraft. In hovering state UAV is difficult still by the eddy flow. It is so unstable even the distance and altitude of the inspection surface. Also it requires to safely by a long range distance impact hammer to the inspection surface. And needs to reduce of reaction force at the hammer collision by the impact force for the UAV. In this study, to verify a prototype pneumatic device and the pulley extrusion device for the hammer drive system that generates a hammering sound. Also, examined sound pressure and sound collection methods required hammering test were compared with the conventional hammering test. 2 Approach to the Inspection Surface by the UAV 2.1 Stabilization of Flight Multi-rotor UAV is to stabilize [3–5] the flight, GPS is used in addition to the accelerometer, gyro, barometer, compass. The control algorithm of the flight stabilization is to achieve an autonomous flight on the basis of the map information. In recent years, it has been added to ensure the safety distance of the earth’s surface and the front direction by the ultrasonic sensor. In addition, it has achieved recognition and avoidance of the obstacle and suppression of the progress by the image sensor. Stabilization of the flight by the sensor is to facilitate the operation. However, it is known to flight become unstable by the flight loses [6, 7] measuring element of the sensor. So, it assume access to the inspection surface. The inspection surface is assumed to slope and buildings wall and pier bottom. Flight control modes of a typical multi-rotor UAV is plural set. In this study, attitude mode (ATTI) using the accelerometer gyro barometer and compass. And it assume the GPS mode added with the position and altitude information by GPS to ATTI mode. 2.2 The Problem with GPS Mode GPS mode is to improve the measurement accuracy by many capture each of the radio signal from the GPS satellite group that moves the sky. To the measurement of the position information, it is necessary to capture the minimum of four satellites. It is possible to correct the positional information of the three-dimensional by complement capture the number of satellite increases. To capture the satellite, it is necessary that the sky is opened. However, UAV is close to the slopes and building a wall for the shooting of the inspection surface and capture the number of satellite is reduced because of the wall. Thus, measurement accuracy of the position information is reduced and the flight of the UAV becomes unstable. In addition, UAV is close to the pier bottom and capture the number of satellite is reduced. Thus, when the close-up of slope and buildings walls and pier bottom using UAV, it is preferably performed by ATTI mode. The operator should be carried out frequently the attitude maneuvers of aircraft, but prevent the disturbance of the great attitude by GPS. 2.3 Structure and the Turbulence For flight altitude of maintenance using the barometer in ATTI mode. When the barometer measure the atmospheric pressure in flight, the reduction of the pressure by the crosswind to the aircraft is a problem. If the decrease in the measurement value of the atmospheric pressure, the flight controller of UAV adjusts the rotor output in order to stabilize the altitude. It is known that turbulent is generated by the wind flow is blocked in the structure around, such as slopes and buildings walls and pier bottom. When you operate the UAV from the visual and FPV of the video, it is problem instability of unintentional high degree. If the aircraft is affected to turbulence, it is necessary to consider the influence of barometric addition to aircraft flows. The flight to the time of the wind weakens calm in order to avoid this. 3 The Hammering Test Equipment to Loaded on UAV 3.1 Specification Hammering test is the inspection by the surface impact, which is a non-destructive method for the internal by the reflect sound. Testing is done the internal situation assessment from the reflected sound with hammer collision on the surface. The goal of this study is the realization from the UAV in flight of hammering test. Hammering test equipment should not exceed takeoff weight. Lightweight is desirable in consideration of the dynamics of the UAV. In case of flight a building wall or slope face or pier bottom is relatively unstable. UAV will increase the risk of collision. Therefore, the hammer produced by aluminum pipe ([$$ {\upvarphi} $$]6 (mm), 1.0 (m)) that fitted with a M5 iron bolt to the tip. The slide mechanism is composed of ABS resin pipe (inner diameter [$$ {\upvarphi} $$]6 (mm)) through the hammer. Thereby, it was realized hammer slide stroke of maximum 650 (mm). In this study, to verify a prototype pneumatic device and the pulley extrusion device for the hammer drive system that generates a hammering sound. 3.2 Pneumatic Device It is verify the pneumatic drive to the hammer of the slide mechanism. The prototype of ducted fan is driven by a brushless motor. To generate a positive pressure and negative pressure switching the direction of rotation of the brushless motor. Hammer is extruded by positive pressure. In the reverse, hummer go back to the standby position by negative pressure. The model for the 3D printer was designed in a 3D CAD (Figs. 1 and 2). [] Fig. 1. 3D CAD model (Ducted fan) [] Fig. 2. Pneumatic device The equipment weighs is 205 (g). Resistance of cylinder and piston is stabilized at a maximum speed of about 0.4 (m/S) by the frictional. Direct drive (Rack and pinion mechanism) is become unstable flight by cylinder reaction force of the hammer collision. Reduction of the collision force due to the control of the air pressure is important. Piston is moved by change of internal pressure. Therefore, time lag is occur about 0.5 (s) from the start-up. Time lag becomes a weakness of FPV control. 3.3 Pulley Extrusion Device It is verify the pulley extrusion device to the hammer of the slide mechanism. The piston is moved by DC motor with decelerated gear to 1/5. No load of motor is 7500 (rpm), and stalling torque of motor is 4.12 (N\*m). The aluminum pipe is moved by a pulley ([$$ {\upvarphi} $$]37 mm) on the output shaft (Fig. 3). [] Fig. 3. Pulley extrusion device Hammer collision speed of the motor voltage 10 (V) is about 1.9 (m/s). The equipment weighs is 174 (g). The ABS resin pulley is produced by 3D printer. Resin pulley and aluminum pipe are slip each other. Therefore, rubber packing is mount in the center of the pulley. Hammer to prevent the drop off from the cylinder, providing an end stop at both ends of the hammer. When the motor is stationary, it can be moved freely when 0.02N or more force to the slide direction is applied. In this way escape excessive reaction force of the piston. The DC motor is controlled by Arduino. The hammer is moved smoothly even for the switching of the forward and reverse rotation of the motor. In this study is adopt to pulley extrusion device. 4 Verification of Hammering Test 4.1 Method of Sound Picked Up The sound picked up using a condenser microphone ([$$ {\upvarphi} $$]10 (mm), frequency: 20–20 (kHz)). The sound waves from microphone record for IC recorder (weight: 58 (g), memory 8 (GB)). The noise factors are the brushless motor and wind noise at props. Therefore, we designed a component for sound sip transferred solid-borne sound from the cylinder unit. 4.2 Experimental Setup of Hammering Test There were fabricated that assumes defective condition by rectangular concrete of specimen (W × 40, D × 40, H: × 50(mm)/weight: 170(g)). Assumes defective condition makes a hole on the top center ([$$ {\upvarphi} $$]6 (mm), depth 30 (mm)). The hammering sound perform by hammer for fixed specimen on the side. The distance between specimen and the hammering test equipment was set to 500 (mm). DC motor voltage of hammering test equipment is changed to 4–10(V) were measured hammer speed. Sound data that collected by the microphone was analyzed on a PC by a digital IC recorder. Sampling rate is 44.1 (kHz). DFT processing was verified against any point of hammering sound area in the Sound Pressure Level (SPL) (Fig. 4). [] Fig. 4. The difference in attenuation and sound pressure peak from the SPL First, it was hammering test [8] by manual operation. The weight of the test hammer is 232 (g). Hammering sound by manual operation has been converted with 0.67N from the load cell measurement. Figure 5 can be checked the analysis point of DFT was specified to the maximum sound pressure after the shock (N3) comparison. Defective specimen is low output in the region of 15 kHz–20 kHz. This can be recognized as the difference in audible hammering sound in the cause. [] Fig. 5. Magnitude spectrum 4.3 Experimental Results DC motor voltage of hammering test equipment is changed to 4/6/8/10(V) were measured hammer speed. Motor from the device starting rotation, and the motor is stopped with a hammer of a collision. The distance between specimen and the hammering test equipment was set to 500 (mm). From the motor driving sound was calculated moving speed by the time of start–stop (Fig. 6). [] Fig. 6. SPL of normal specimen (4/10(V)) Collision speed of the hammering test equipment is 0.76 (m/s)–1.92 (m/s). Hammering test equipment sound at 10 (V) has been converted with 0.22 (N) from the load cell measurement. This is 1/3 of the force of manual operation. The difference of attenuation after hummer collision by the collision speed from the SPL that captured to the PC can be checked. It was carried out similar verification of deficient specimen. The maximum sound pressure (N1) at collision were compared of normal and defective specimen was not a big difference in the motor voltage. Therefore, a focus is attenuation of hammering sound. A big collision (8 10(V)) are definite change cannot be confirmed. The difference in attenuation situation of 6 V (Fig. 7) when compared to the SPL of normal data can be checked. Each other specimens are maintained at a high sound pressure until after the collision 0.011 (s). However, it can be confirmed differences between the normal specimen and the defective specimen on the attenuation on until 0.07 (s). Focusing on SPL peak (N5) is compared in the DFT (Fig. 8). [] Fig. 7. SPL of normal and deficient specimen (6 (V)) [] Fig. 8. Magnitude spectrum (6 V) Defective specimen is low output in the region of 15 (kHz)–20 (kHz). This is almost the same as the trend of the manual operation. Once the hammering sound disappeared after 0.195 (s) by the attenuation. Hammering sound of the defective specimens feel to “dull hollow sound”. Sound wave attenuation by defective could be confirmed from the audible and the analysis of the recorded data. 5 Conclusion We have developed hammering test equipment to loaded on multi-rotor UAV. The equipment weighs is 174 (g). It was realized hammer slide stroke of maximum 650 (mm). The hammering test equipment is 1/3 of the force of manual operation. Defective specimen is low output in the region of 15 (kHz)–20 (kHz). There is a change in the attenuation trend of sound due to the collision speed of the hammer. Sound wave attenuation by defective could be confirmed from the audible and the analysis of the recorded data. References 1. Nakayama, A., Sonoda, Y., Miyoshi, A.: An experimental study on the sound pressure characteristics of the rotary hammering test. Proc. Japan Concr. Inst. 30(3), 1729–1734 (2008) 2. Sonoda, Y., Nakayama, A., Miyoshi, A.: A fundamental study on diagnostic mechanism of the rotary hammering test by acoustic analysis. Kozo Kogaku Ronbunshu. J. Struct. Eng. 54(A), 599–606 (2008) 3. Mettler, B., Kanadev, T.: Attitude control optimization for a small-scale unmanned helicopter. In: AIAA Guidance, Navigation, and Control Conference and Exhibit, Denver, CO, 14–17 August 2000 4. Adachi, S., Hasimoto, S., Miyamori, G., Tan, A.: Autonomous flight control for a large-scale unmanned helicopter - system identification and robust control systems design. Trans. Inst. Electr. Eng. Japan D, Publ. Ind. Appl. Soc. 121(12), 1278–1283 (2001) 5. Hoffmann, G., Huang, H., Waslander, S., Tomlin, C.: Quadrotor helicopter flight dynamics and control: theory and experiment. In: Proceedings of the AIAA Guidance, Navigation, and Control Conference, AIAA 2007-6461 (2007) 6. Mizui, M., Yamamoto, I., Ohsawa, R.: Effects of propeller-balance on sensors in small-scale unmanned aerial vehicle. IOSRJEN 2(8), 23–27 (2012)CrossRef 7. Mizui, M., Yamamoto, I., Ohsawa, R.: Resonance analysis of the UAV Rotor-arm part. IOSRJEN 2(8), 28–32 (2012)CrossRef 8. Fujii, H., Yamashita, A., Asama, H.: Hammering diagnosis algorithm with automated calibration. Trans. JSME 82, 1–18 (2016). p. 15-00426 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_4 Uncertainty Quantification for Small Robots Using Principal Orthogonal Decomposition Konstantinos Karydis¹   and M. Ani Hsieh²   (1) Department of Electrical and Computer Engineering, University of California, 900 University Ave, 357 Bourns Hall B, Riverside, California 92521, USA (2) Department of Mechanical Engineering and Mechanics, Drexel University, 3141 Chestnut Street, Randell 115, Philadelphia, Pennsylvania 19104, USA     Konstantinos Karydis (Corresponding author) Email: kkarydis@ece.ucr.edu   M. Ani Hsieh Email: mhsieh1@drexel.edu Abstract The paper reports on a new data-driven methodology for uncertainty quantification in centimeter-scale robots. We employ tools from functional expansion-based methods, the Karhunen-Loeve (KL) decomposition in particular, to identify appropriate reduced-order models of robotic systems through empirical observations, and discover underlying dominant dynamical behaviors of a system in the presence of uncertainty. The approach is applied to a quadrotor aerial vehicle tasked to hover at various heights from the ground. Several experimental data sets are collected to extract dominant modes. First-order modes correctly capture expected behaviors of the system, while higher-order modes quantify the degree of uncertainty at different hovering conditions. The information provided by this model can be used to develop robust controllers in the face of aerodynamic disturbances and unmodeled nonlinearities. Keywords Uncertainty quantificationPrincipal orthogonal decompositionCentimeter-scale robotsUnmanned aerial vehicles 1 Introduction We witness an increased popularity of inexpensive, centimeter-scale flying, swimming, crawling, and rolling vehicles. As vehicle sizes become smaller, dynamics becomes significantly more impacted by interactions with the environment. For example, careful placement and design of environmental barriers can impact [1] navigation strategies of “wild bodies” (vehicles that are only capable of random motions). Similarly, environmental geometry can effectively “herd” collectives of autonomous robots to distinct locations within the workspace [2]. Other examples include underwater vehicles operating in turbulent flows [3, 4]. To successfully address the challenges posed by the inherently uncertain environment for centimeter-scale robots, it is important to supply control design with models that can capture or quantify such uncertainties. In general, uncertainty quantification can be performed in various ways. One way is to use an underlying first-principles model and data from physical processes. Examples include extending deterministic models to stochastic regimes with probabilistic guarantees on the model fidelity [5] and using underlying models as prior information [6] when training a target Gaussian Process model [7]. A different approach is to employ data-driven strategies. Examples include kernel methods [8] (such as Volterra models [9]), or genetic algorithms that distill physical laws by selecting nonlinear terms in ode models [10]; see [11] for a general overview. This paper reports on a data-driven approach based on functional expansion methods. The aim is to identify appropriate reduced-order descriptions of robotic systems through empirical observations, and discover the underlying dominant (or principal) dynamical behavior of a system of interest in the presence of uncertainties and/or disturbances resulting from interactions with the environment. In particular, we use the Karhunen-Loève (KL) decomposition [12, 13], also known as the Principal Orthogonal Decomposition (POD), which is the generalization of the classical principal component analysis (PCA) to dynamical systems. Similar to PCA, KL/POD decomposes the system into principal dynamical modes consisting of principal spatial and principal temporal modes. As such, KL/POD not only enables the development of reduced-order descriptions for any system of interest, but it can also provide critical insight into the synthesis of robust feedback control strategies for systems subject to uncertainties and/or disturbances that are difficult to model from first principles. We present a procedure that leverages the KL decomposition or POD to extract the primary modes within the context of operating centimeter-scale robots in uncertain environments. We consider a quadrotor aerial vehicle, tasked to hover under aerodynamic disturbances and unmodeled nonlinearities. Several experimental data sets of hovering at different heights are collected and analyzed. A single model capable of describing and predicting the principal dynamics of the quadrotor hovering behavior at varying distances from the ground is then developed. The ability to better model the underlying dynamics only through observations provides useful information toward the design of robust control algorithms for all types of autonomous vehicles. 2 Methodology We consider general dynamical systems of the form [$$\begin{aligned} \mathbf {\zeta }\_{k+1} = \mathbf {f}(\mathbf {\zeta }\_k) \end{aligned}$$] (1) that evolve on a finite dimensional manifold M. While the dynamics do not need to be discrete, the discrete form is retained since we assume the dynamics can be observed and sampled over time and therefore the resulting data to be analyzed is in fact discrete. In (1), [$$\mathbf {\zeta }\_k \in \mathbb {R}^n$$] is the state of the system at time step k. 2.1 Background We leverage the Karhunen-Loève (KL) Decomposition [12, 13] for the identification and extraction of the principal dynamic modes of a system with dynamics (1). The KL Decomposition is also called the Principal Orthogonal Decomposition (POD) [13], or Principal Component Analysis (PCA) for time-dependent systems [12, 14], and is closely related to standard singular-value decomposition. Let [$$\mathbf {g}:M \rightarrow \mathbb {R}^n$$] denote the vector-valued observables for (1) where the observations at timestep k are denoted as [$$\mathbf {g}(\mathbf {\zeta }\_k)$$] for [$$k=0, \ldots , T$$]. The KL decomposition or POD, decomposes [$$\mathbf {g}$$] into its principal components as [$$\begin{aligned} \mathbf {g}(\mathbf {\zeta }(t)) = \sum \alpha \_i(t)\mathbf {\phi }\_i + \mathbf {r}(t), \end{aligned}$$] (2) where [$$\alpha \_i(t)$$] is purely a function of time, [$$\mathbf {\phi }\_i \in \mathbb {R}^n$$], and [$$\mathbf {r}(t)$$] denotes the residual. As in PCA, the KL decomposition identifies the principal subspace that [$$\mathbf {g}$$] resides in by minimizing the mean squared error, i.e., the norm of the residual [$$\mathbf {r}(t)$$]. Different from PCA, KL/POD decomposes [$$\mathbf {g}$$] into its temporal, [$$\alpha \_i(t)$$], and spatial, [$$\phi \_i$$], eigenmodes and effectively generalizes PCA to time-dependent systems. Dynamics (1) is approximated by the observables [$$\mathbf {g}(\mathbf {\zeta }\_k)$$] for [$$k=0, \ldots , T$$] as [$$\begin{aligned} \mathbf {f}(\mathbf {\zeta }\_k) \approx \sum \_{i = 1}^{D} \alpha \_i(t) \phi \_i, \end{aligned}$$] (3) where [$$D \in \mathbb {Z}$$] are the D most dominant modes extracted from [$$\mathbf {g}(t)$$]. As in PCA, “dominant” is determined by identifying the eigenmodes associated with the largest eigenvalues of the correlation matrix computed from the observables [15]. 2.2 Methodology This work focuses on identifying the dominant dynamic modes for hovering UAVs (quadrotors in particular) at varying altitudes. Rather than perform exhaustive system identification at varying hovering altitudes, we leverage the KL decomposition to identify the principal dynamics. The objective is to identify and construct an appropriate set of basis functions to model the vehicle’s dynamics for a range of hovering heights. Identification of the main dynamical modes could lead to better modeling higher-order effects—such as ground effects—and providing a unified framework to synthesize stabilizing feedback controllers. We begin with considering the dynamics of a quadrotor tasked to hover at a fixed altitude subject to no ground effects, i.e., hovering in free space. We assume the free space hovering dynamics follows a standard second-order system response whose transfer function is given by [$$\begin{aligned} \frac{Y(s)}{U(s)}=\frac{\omega \_{n}^{2}}{s^2 + 2 \zeta \omega \_n s + \omega \_{n}^{2}} \end{aligned}$$] where [$$\omega \_n$$] denotes the natural frequency of the system and [$$\zeta $$] denotes the damping ratio. The time domain response for this system subject to a unit step input is [$$\begin{aligned} y(t)=1-\frac{1}{\root \of {1-\zeta ^2}}\exp {-\zeta \omega \_n t}\sin (\omega \_n \root \of {1-\zeta ^2}t+\theta ) \end{aligned}$$] (4) with [$$\theta = \cos ^{-1}\zeta $$]. Let [$$\mathbf {g}\_{h\_0}$$] denote the vector-valued observables for hovering in free space and let [$$\alpha \_{1}^{h\_0}(t)$$] and [$$\phi \_{1}^{h\_0}$$] denote the first principal temporal and spatial modes of [$$\mathbf {g}\_{h\_0}$$]. Then, [$$\alpha \_{1}^{h\_0}(t)\phi \_{1}^{h\_0}$$] would result in a response of the form given by (4). Let [$$\mathbf {g}\_{h\_1}$$], [$$\ldots $$], [$$\mathbf {g}\_{h\_N}$$] denote the vector-valued observables for hovering at heights [$$h\_1> h\_2> \ldots > h\_N$$]. We assume ground effect disturbances can be modeled as higher-order terms cascaded in series with the base dynamics as shown in Fig. 1. [] Fig. 1. Block diagram of the baseline system dynamics and candidate higher-order terms introduced by external disturbances or unmodeled nonlinearities. To identify the dominant dynamic modes, let [$$\mathcal{B}$$] denote the basis set of dynamical modes with [$$\mathcal{B}\_0 = \alpha \_{h\_0}(t)\phi \_{h\_0}$$] as the first or base primary mode. In other words, [] is the dominant free space hovering dynamics obtained via the KL decomposition of [$$\mathbf {g}\_{h\_0}$$]. We propose a procedure similar to the Gram Schmidt orthogonalization process to obtain the remaining basis functions to construct [$$\mathcal{B}$$]. Let [$$\bar{\mathbf {g}}\_{h\_i}$$] denote the L2 normalized version of [$$\mathbf {g}\_{h\_i}$$]. We first perform a KL decomposition on [$$\hat{\mathbf {g}}\_{h\_i} = \bar{\mathbf {g}}\_{h\_i}-\sum \_{j=0}^{i-1} \mathcal{B}\_j$$]. We then set [$$\mathcal{B}\_i = \alpha \_{1}^{h\_i}(t)\phi \_{1}^{h\_i}$$]. We analyze its frequency spectrum of [$$\mathcal{B}\_i$$] and if it is different from the frequency spectrum of a standard white noise process, we add [$$\mathcal{B}\_i$$] to [$$\mathcal{B}$$]. The process is repeated in ascending order for [$$i = 1, \ldots , M$$]. The procedure is summarized in Algorithm 1. Note that lines 6 and 8 in Algorithm 1 denote the KL/POD decomposition and Fast Fourier Transform, respectively. The resulting dynamics of the system is [$$\begin{aligned} y(t) = h\_{des}\sum \_{j=0}^{M}\frac{a\_j(h\_{des})}{h\_j}\mathcal{B}\_j \end{aligned}$$] (5) where [$$h\_{des}$$] denotes the desired hovering height and [$$a\_j(h\_{des})$$] are scalar functions which we call altitude scaling factors. These factors are computed so that the response given by (5) best fits the training data sets given by [$$\mathbf {g}\_{h\_0}, \ldots , \mathbf {g}\_{h\_M}$$]. [] In this work, we assume all the dynamics are captured within the first principal mode of every [$$\mathbf {g}\_{h\_i}$$]. In general, the principal dynamics of [$$\mathbf {g}$$] may consist of the first D modes. Algorithm 1 can be extended for these scenarios by repeating Lines 8–10 for the D modes. Lastly, the procedure can be similarly extended to further quantify measurement and other sources of uncertainties. 3 Application to a Hovering Quadrotor The previous section focused on the development of the general approach. The purpose of this section is to make the aforementioned steps explicit. 3.1 Experimental Setup We use an AscTec Hummingbird quadrotor tasked to hover at specific distances from the ground, [$$h\_{des}\in \{2,11,20,50\}$$] cm. To isolate the hovering dynamics, the robot is physically constrained to move along the vertical direction; see Fig. 2. [] Fig. 2. The experimental setup. Nylon cord is attached to the support structure to constrain the problem in one dimension. The desired hovering altitudes have been selected so to highlight the impact of ground effect in the vehicle’s dynamics. The highest altitude [$$h\_{des} = 50$$] cm corresponds to hovering in free space, that is, out of ground effect. This selection is consistent with aerodynamics-based studies which suggest that the ground effect diminishes at heights between three [16] to five [17] times the radius of the equipped propeller; our vehicle is equipped with propellers of 10 cm radius. The highest altitude also serves as the baseline. The lowest desired altitude of 2 cm brings the vehicle at the heart of the ground effect; as we will show shortly, our approach is able to identify the higher-order dynamics that manifests itself when flying very close to the ground. The two intermediate heights provide additional data points to allow us to identify the altitude scaling factors [$$a\_j$$] in (5). [] Fig. 3. 250 experimental trials of hovering at 2 cm, 11 cm, 20 cm, and 50 cm. Figure 3 shows our experimental data sets. We ran a total of 250 closed-loop¹ trajectories for each desired hovering altitude, and captured the response of the system using a VICON motion capture camera system. Each trajectory lasts for 15 s. The response for the case of [$$h\_{des} = 50$$] cm closely resembles the expected response of a second-order system (4). The responses for the two intermediate cases are qualitative similar, suggesting that the dynamics at these two altitudes should be the same. On the contrary, the response of the system when tasked to hover very close to the ground is fundamentally different, strengthening the hypothesis that higher-order dynamics are present at this operating regime. 3.2 Identification of Principal Dynamic Modes To provide evidence of the robustness of our approach we consider two case studies. In each, one of the intermediate hovering height data sets is excluded from the analysis, and is used only for validation purposes later in Sect. 3.3. In detail, in Case 1 we remove the data set corresponding to [$$h\_{des}=11$$] cm, while in Case 2 we remove the data set corresponding to [$$h\_{des}=20$$] cm. Results for Cases 1 and 2 are shown in the left– and right-hand side of the page at Fig. 4. We begin by constructing the L2 normalized versions of the remaining data sets. Then, we run the POD analysis on the normalized data. Figures 4a–b depict the identified primary mode for each case. Two key observations are in order. First, from Fig. 3 we note that the system is essentially a one-dimensional system and thus the analysis results in a single primary spatial mode with a corresponding temporal mode. Second, from Figs. 4a and b it can be readily verified that in both cases, the primary mode for the highest (normalized) altitude captures exactly the expected mean behavior. This is not surprising since any deviation from the mean is purely due to measurement and actuation noise. This primary mode will thus serve as the first principal mode. Next, we normalize the extracted first principal mode according to the appropriate intermediate hovering height, and subtract it from the latter data set. We then perform POD analysis on the output of the above operation. The primary mode is reported in Figs. 4c–d. An interesting observation is that the extracted mode is similar in both cases despite the fact that the desired intermediate height is different. This observation offers further evidence in support of the hypothesis that the dynamics of the vehicle at these two operation points is similar. This extracted primary mode will serve as the secondary principal mode. [] Fig. 4. Identified principal dynamical modes are marked with an ‘x’, and lie within the one standard deviation of the data set they relate to. Primary (a)–(b), secondary (c)–(d), and ground-effect (e)–(f) modes are shown on the left for Case 1, and on the right for Case 2. Lastly, we perform the above steps for the lowest height. The extracted primary modes for Case 1 and 2 are shown in Figs. 4e and f, respectively. This last mode serves as the third principal mode; to better portray that this mode is mainly driven by the ground effect, we will call it ground-effect principal mode. 3.3 Data Reconstruction Using Principal Dynamic Modes The identified principal modes can be used to predict the behavior of the system when tasked to hover at other altitudes. This can be achieved by applying (5) once the appropriate scaling functions, [$$a\_j$$], have been identified. To identify [$$a\_j$$] we perform least squares to fit the available training data sets, for each case, and then take the average of the reported values. The outcome of this procedure is reported in Fig. 5. The first principal mode is always present, while the ground effect mode is primarily present when very close to the ground. The secondary principal mode is mainly responsible for shaping the behavior of the system when the ground effect mode is not dominating all other higher-order modes. Using the identified values for the scaling factors, we can now reconstruct and predict the behavior of the system at all available hovering altitudes as Fig. 6 shows. The solid red curves (in color version) denote the predicted behavior of the system using the POD-derived model (5). Dashed curves denote the experimental averages, while the solid gray curved indicate one standard deviation around the (experimental) mean. It can be observed that the prediction is very accurate in all cases, even though partial information was used. [] Fig. 5. Altitude scaling factors calculated by the principal dynamical modes. [] Fig. 6. Reconstructed data plotted against sample means (dashed black curves) and one standard deviation around the latter (solid gray curves). Remark 1 The scaling functions were chosen to best fit the data in a least squares sense. However, it may be possible to further tune the scaling functions to improve the model’s prediction ability through some other optimization procedure. While the reconstruction of the sample means is straightforward, reconstruction of the higher-order statistics can be more subtle. This is not surprising given the nature of the KL decomposition. One direction for future work is to use the technique to better characterize the nature of such uncertainties. 4 Conclusions To the authors’ awareness, this is the first attempt to empirically extract and characterize the hovering dynamics of quadrotors through the classic KL decomposition. The proposed approach captures both the expected behavior in free space—that is, of a second-order system—and higher-ordered effects that are observed when the vehicle is under the influence of environmental uncertainties such as ground effects. As higher-order effects are typically difficult to capture, it makes sense to consider a data-driven strategy focused on identifying the principal dynamics of the system. The proposed strategy is a general data-driven approach toward the identification of the dominant dynamical behavior exhibited by any dynamical system. The procedure provides a systematic approach to correctly extract the baseline vehicle dynamics and the dynamical contributions of the external disturbances. The resulting output can facilitate the design of more robust feedback strategies to stabilize vehicles operating in uncertain environments; this is part of ongoing research. We have showed the practical utility of the proposed approach by developing a suitable model for a hovering quadrotor. The ability to hover, perch, and land when the vehicle operates under the influence of external disturbances, is essential for applications such as Intelligence, Surveillance, and Reconnaissance (ISR), environmental monitoring, and rooftop inspection, to name a few. Acknowledgments This work is supported in part by NSF under grant CMMI-1462825. References 1. Bobadilla, L., Martinez, F., Gobst, E., Gossman, K., LaValle, S.M.: Controlling wild mobile robots using virtual gates and discrete transitions. In: American Control Conference, pp. 743–749 (2012) 2. Fine, B.T., Shell, D.A.: Eliciting collective behaviors through automatically generated environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3303–3308 (2013) 3. Heckman, C.R., Hsieh, M.A., Schwartz, I.B.: Controlling basin breakout for robots operating in uncertain flow environments. In: Hsieh, M.A., Khatib, O., Kumar, V. (eds.) Experimental Robotics. STAR, vol. 109, pp. 561–576. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-23778-7\_​37 CrossRef 4. Hsieh, M.A., Hajieghrary, H., Kularatne, D., Heckman, C.R., Forgoston, E., Schwartz, I.B., Yecko, P.A.: Small and adrift with self-control: using the environment to improve autonomy. In: International Symposium on Robotics Research, Sestri Levante, Italy, September 2015 5. Karydis, K., Poulakakis, I., Sun, J., Tanner, H.G.: Probabilistically valid stochastic extensions of deterministic models for systems with uncertainty. Int. J. Robot. Res. 34(10), 1278–1295 (2015)CrossRef 6. Hall, J., Rasmussen, C.E., Maciejowski, J.: Modelling and control of nonlinear systems using gaussian processes with partial model information. In: 51st IEEE Conference on Decision and Control, pp. 5266–5271 (2012) 7. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)MATH 8. Hofmann, T., Schölkopf, B., Smola, A.J.: Kernel methods in machine learning. Ann. Stat. 36(3), 1171–1220 (2008)MathSciNetCrossRefMATH 9. Ogunfunmi, T.: Adaptive Nonlinear System Identification: The Volterra and Wiener Model Approaches. Signal and Communication Technology. Springer-Verlag, New York (2007)CrossRefMATH 10. Schmidt, M., Lipson, H.: Distilling free-form natural laws from experimental data. Science 324, 81–85 (2009)CrossRef 11. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. The MIT Press, Canbridge (2012)MATH 12. Karhunen, K.: Uber lineare methoden in der wahrscheinlichkeitsrechnung. Ann. Acad. Sci. Fenn. 37, 1–79 (1946) 13. Loeve, M.: Probability Theory. Van Nostrand, Princeton (1955)MATH 14. Hotelling, H.: Analysis of a complex statistical variables into principal components. J. Educ. Psychol. 24(6), 417–441 (1933)CrossRefMATH 15. Kirby, M.: Geometric Data Analysis. Wiley, New York (2001)MATH 16. Johnson, W.: Helicopter Theory. Princeton University Press, Princeton (1980) 17. Powers, C., Mellinger, D., Kushleyev, A., Kothmann, B., Kumar, V.: Influence of aerodynamics and proximity effects in quadrotor flight. In: Desai, J.P., Dudek, G., Khatib, O., Kumar, V. (eds.) International Symposium on Experimental Robotics. STAR, vol. 88, pp. 289–302. Springer, Heidelberg (2012)CrossRef Footnotes 1 The system is modeled using a second-order transfer function, and is controller through a PID controller; for more details see [5].   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_5 Collaborative 3D Reconstruction Using Heterogeneous UAVs: System and Experiments Timo Hinzmann¹  , Thomas Stastny¹  , Gianpaolo Conte²  , Patrick Doherty²  , Piotr Rudol²  , Marius Wzorek²  , Enric Galceran¹  , Roland Siegwart¹   and Igor Gilitschenski¹   (1) Autonomous Systems Lab, ETH Zurich, Zurich, Switzerland (2) Department of Computer and Information Science, Linköping University, Linköping, Sweden     Timo Hinzmann (Corresponding author) Email: timo.hinzmann@mavt.ethz.ch URL: http://www.asl.ethz.ch   Thomas Stastny Email: thomas.stastny@mavt.ethz.ch URL: http://www.asl.ethz.ch   Gianpaolo Conte Email: gianpaolo.conte@liu.se URL: http://www.liu.se   Patrick Doherty Email: patrick.doherty@liu.se URL: http://www.liu.se   Piotr Rudol Email: piotr.rudol@liu.se URL: http://www.liu.se   Marius Wzorek Email: marius.wzorek@liu.se URL: http://www.liu.se   Enric Galceran Email: enric.galceran@mavt.ethz.ch URL: http://www.asl.ethz.ch   Roland Siegwart Email: roland.siegwart@mavt.ethz.ch URL: http://www.asl.ethz.ch   Igor Gilitschenski Email: igor.gilitschenski@mavt.ethz.ch URL: http://www.asl.ethz.ch Abstract This paper demonstrates how a heterogeneous fleet of unmanned aerial vehicles (UAVs) can support human operators in search and rescue (SaR) scenarios. We describe a fully autonomous delegation framework that interprets the top-level commands of the rescue team and converts them into actions of the UAVs. In particular, the UAVs are requested to autonomously scan a search area and to provide the operator with a consistent georeferenced 3D reconstruction of the environment to increase the environmental awareness and to support critical decision-making. The mission is executed based on the individual platform and sensor capabilities of rotary- and fixed-wing UAVs (RW-UAV and FW-UAV respectively): With the aid of an optical camera, the FW-UAV can generate a sparse point-cloud of a large area in a short amount of time. A LiDAR mounted on the autonomous helicopter is used to refine the visual point-cloud by generating denser point-clouds of specific areas of interest. In this context, we evaluate the performance of point-cloud registration methods to align two maps that were obtained by different sensors. In our validation, we compare classical point-cloud alignment methods to a novel probabilistic data association approach that specifically takes the individual point-cloud densities into consideration. Keywords Collaborative UAV mapping missionsPoint-cloud generationVision-laser point-cloud alignmentDelegation of heterogeneouse agents 1 Introduction Field robotics has seen great gains in recent years owing both to robustified robotic platforms and increasing autonomous behaviors and capabilities. In particular, autonomous unmanned aerial vehicles (UAVs) of various classes, utilizing state-of-the-art perceptive sensors and sensing techniques, have proven worth in both large- and small-scale mapping applications, providing a wide array of sensor data. In large-scale mapping scenarios, recent developments in solar-powered, fixed-wing UAV (FW-UAV) technology have enabled extreme long-endurance for low-altitude coverage of vast areas in a compact and hand-launchable form [1]. Finer-resolution mapping on a smaller scale has also been demonstrated using aerial laser scans from autonomous helicopters [2]. Fast and fully autonomous generation of up-to-date maps could potentially be a great advantage for rescue workers looking for missing persons, or in disaster management scenarios, like floods, forest fires, and earthquakes. However, a crucial element to the utility of such operations is the ease of use for, possibly, non-technical operators. Further, no single UAV is a one-fits-all solution for the wide array of sensing data that may be required by end users. In these cases, a robotic team of various actors with mixed, but complementing, capabilities, working together within the context of a collaborative, cognitive framework, on a higher-abstraction, would be particularly impactful. 2 Problem Statement: Collaborative 3D Reconstruction Point-cloud generation from optical cameras on a large scale from small FW-UAVs is sparse, due to, relatively high flying altitudes and limited image resolution. Contrarily, laser point-clouds generated from low-flying autonomous helicopters are dense, but only cover a small area. Merging these two data types together into a single global map has obvious benefits in the sense of real-world search and rescue or disaster management operations, where a large scale (sparse) map could provide operators with coarse information and a means to select “areas of interest” to send agents for a “closer look”. This closer look would provide dense maps of smaller areas which, when merged with the global map, results in a more accurate representation of the environment for both, human operators and collaborating robotic actors. Leveraging the various capabilities of each participating agent in an autonomous manner also requires a higher abstraction of task delegation. In sum, we show a real-world demonstration of distributed, autonomous map making and vision-to-laser point-cloud registration from differing aerial views and mixed sensor data. [] Fig. 1. Two UAV platforms during their cooperative scanning mission. [] Fig. 2. Aerial image of test site near Motala, Sweden. In this context, we employ a novel probabilistic data association method [3] that robustly aligns two maps that were generated by different sensors. Compared to [3], we see the following major contribution: Our data was recorded on different agents, sensor units and flight paths and hence represents a real-world scenario. In contrast, the data presented in [3] was recorded by the same agent and/or even with the same sensor which simplifies the registration process¹. 3 Technical Approach This section opens with an overview of the delegation framework. For more details we refer to the companion paper [4]. The section proceeds with a description of the state estimation, point-cloud generation and concludes with a focus on the point-cloud registration methods. 3.1 Mission Process and Delegation Framework A high-level depiction of the mission process is provided in Fig. 3. The delegation framework [5] which includes delegation modules from each of the participating agents, provides both a formal and software infrastructure for specifying and generating collaborative multi-agent plans to achieve complex goals such as multi-UAV 3D reconstruction of selected regions. Delegation is based on a recursive algorithm that sends requests of the following type, Delegate( Agent1, Agent2, Task, Context ), where agent1 makes an attempt to delegate Task to Agent2, given a specific Context specified as a set of constraints. Examples of constraints would be temporal constraints or restrictions on flight altitudes. Agents can be humans or robots. Tasks are represented using Task Specification Trees (TSTs). TSTs have both declarative and procedural descriptions. Internal nodes in a TST represent control modes such as sequence and concurrency, while leaf nodes represent domain dependent elementary tasks executable by different participating platforms. The delegation process, as illustrated in Fig. 4, itself begins with a goal request TST often provided by a human operator and if successful, results in an expanded TST where all constraints are satisfied. Sub-trees in the final TST are also appropriately allocated to those platforms with the proper capabilities. TSTs can be generated dynamically using automated planning techniques, or by using generic TST templates that can be instantiated appropriately. In this example (Fig. 4), a concurrent scanning plan is generated for one region where two separate sub-regions are covered by each of the UAVs involved, respectively. The scan-map task in the TST calls a region partitioning algorithm to determine appropriate sub-regions for platforms to scan based on their capabilities. The delegation process itself is quite complex and involves auctions, constraint solving and dynamic TST expansion. The scan-map task involves use of partitioning algorithms and the scan-map-single tasks involve internal path planning by the respective platforms. During the mission execution phase, each system executes its part of the mission TST relative to timing and other constraints. [] Fig. 3. Mission process: a human operator broadcasts a goal request for a data acquisition mission via its delegation module. Platforms with available capabilities reply and a delegation process ensues among each of the platforms’ delegation modules. If successful, the net result is a joint plan to execute. Upon execution, raw/processed data can be stored locally or globally. During the mission or upon mission completion the human operator can access the results via specialized interfaces. [] Fig. 4. Goal TST request from operator and generated plan TST involving both RMAX (/rmax0) and FW-UAV (/fw0). Internal nodes: (C) concurrent, (S) sequence. 3.2 State Estimation and Point-Cloud Generation RW-UAV. The state estimation is used both for autonomous navigation and for point-cloud generation by incorporating laser scanner measurements in form of a direct georeferencing technique [6]. It is based on a Kalman filter algorithm which fuses inertial and GNSS position data. The deployed Kalman filter uses a linear state-space error dynamic model derived from a perturbation analysis of the equations of motion [7]. The Kalman filter produces state estimation at 50 Hz rate and performs the update step using GNSS measurement at 20 Hz rate. FW-UAVs. The Pixhawk PX4 auto-pilot performs an indirect EKF-based state estimation as presented in [8]. Within the Kalman filter linear acceleration and angular rates measurements are used for propagation of the system state. Pressure, GPS velocity and position, as well as magnetometer measurements are used for the state update [8]. The estimated states involve the IMU’s attitude and position in WGS84 coordinates². The vision point-cloud was acquired with an optical camera using a classical photogrammetric approach³. 3.3 Point-Cloud Registration The alignment of point-clouds generated from two unmanned aerial vehicles with different sensors involves consideration of the following challenging aspects: Firstly, one point in the visual source point-cloud does in general not correspond to a point in the laser target cloud and vice versa. Secondly, the sensor noise models are different: Peaky for laser but more spread for visual points due to camera noise and triangulation errors. Thirdly, the laser point-cloud is in general denser than the vision point-cloud. Furthermore, the robots fly at high altitudes. Consequently, a dominant ground plane and few depth discontinuities are common for most datasets. Lastly, a rough initial alignment is given by global positioning systems such as GNSS or fused from the state estimator. To register a visual sparse point-cloud to a dense laser point-cloud the following registration algorithms are evaluated with respect to the problem specifications described above: Iterative Closest Point (ICP), Iterative Probabilistic Data Association (IPDA), Generalized Iterative Closest Point (GICP), and Normal Distribution Transform (NDT). [] Fig. 5. Correspondences (grey) for one source point obtained by kd-tree. [] Fig. 6. Residuals for one source point after one iteration. [] Fig. 7. Weights for one source point after one iteration. One iteration of the Probabilistic Data Association [3] approach consists of the following steps: For every point of the source cloud a kd-tree search is performed with maximal radius [$$r\_{kd,}$$] and maximal number of returned neighbours [$$n\_{kd}$$] as shown in Fig. 5. For every source-target correspondence, the residuals and weights are calculated as illustrated in Figs. 6 and 7 respectively employing expectation-maximization (EM) in combination with e.g. a Gaussian or t-distribution. The red line indicates the evolution of the true correspondence residual and weight for 28 Levenberg-Marquardt optimization steps. Note that the data associations do not change during one iteration but only the residuals and weights update based on the iterative solution of the Levenberg-Marquardt algorithm. These steps can be performed iteratively to increase the area of convergence (IPDA). The advantages of this approach relevant to the problem specifications are the following: (1) a sensor model can be intuitively inserted in the EM-algorithm based on the expected noise, (2) a point of the sparse source cloud can hold correspondences to many points in the target cloud. Iterative Closest Point (ICP) [9–12] is a widely used registration algorithm that has inspired many variants. For the evaluation we use the classic point-to-point approach implemented in the Point Cloud Library (PCL). It performs the following steps until convergence: (1) For every point in the source cloud find the closest point in the target cloud. (2) Estimate and apply the transformation [$$\mathbf {T}$$] that best transforms the source cloud to the target cloud in the sense of a mean squared error. Naturally, ICP’s assumption that one point in the source cloud has an exact correspondence in the target cloud is not fulfilled in the sparse-dense registration problem. Generalized ICP (GICP) [13] levers the classic ICP and point-to-plane ICP into a probabilistic framework. Applied to the aerial registration problem, GICP may profit from dominant ground planes due to the high flying altitudes. In the Normal Distribution Transform [14] the points of the cloud are represented in form of a probability distribution and hence no explicit point correspondences between source and target cloud are established. With regard to the sparse-to-dense point-cloud registration, NDT may fail if the visual cloud is too sparse. The parameter notations of the individual methods are presented in Table 1. Table 1. Parameter notations for IPDA, ICP, GICP and NDT. [] 4 Platform Description The platforms used for the experiments include a rotary-wing Yamaha RMAX and the two fixed-wing UAVs named Techpod and senseSoar. 4.1 RW-UAV The Yamaha RMAX helicopter [15], shown in Figs. 1 and 3, has a rotor diameter of 3.1 m, a maximum take-off weight of 94 kg and a payload capability of about 30 kg. The platform is capable of fully autonomous navigation, including take-off and landing. The basic sensor suite used for autonomous navigation includes a fiber optic tri-axial gyro system and a tri-axial accelerometer system, a RTK GNSS positioning system and an infrared altimeter used for automatic landing. Onboard sensors used for mapping missions include color and thermal video cameras, as well as a class 1 SICK LMS511 PRO 2D laser scanner. The laser scanner’s maximum range is 80 m with a maximum scanning FoV of [$$190^{\circ }$$]. 4.2 FW-UAVs Techpod. The small unmanned research plane Techpod is shown in Fig. 1. It has a classic T-tail configuration, is equipped with one propeller, has a wingspan of 2.60 m and a nominal speed of around 12 m/s. The sensor and processing unit [1] as well as the PX4 auto-pilot are located inside the modified fuselage and allow autonomous mission execution such as GPS waypoint following. SenseSoar. The highly versatile solar-UAV senseSoar was developed at the Autonomous Systems Lab for search & rescue missions and has a wingspan of 3.1 m. With its solar panels it is able to generate an electric power of around 140 W and has shown long-endurance capabilities. Likewise as Techpod, senseSoar is hand-launchable and carries the sensor pod inside the fuselage. 5 Experimental Results The datasets were collected at two locations: (1) in Motala, Sweden which includes a flight field with several houses and trees (Figs. 2 and 8). The resulting experiments are presented in Sects. 5.1 and 5.2; (2) at a mountainside in Isollaz, Italy as presented in Experiment III in Sect. 5.3. [] Fig. 8. Sample flight path of the FW-UAV (yellow) for Exp. I and II: The FW-UAV is loitering in-air until it receives the command for scanning the area from the delegation framework. Based on this request, the path-planner located on the ground station generates a scanning pattern which is transmitted to the FW-UAV via telemetry. After execution, the imagery is sent to the ground station via WiFi where the point-cloud is generated. The path of the RW-UAV is plotted in red. The nominal altitude of the FW- and RW-UAV is 100 m and 48 m. 5.1 Experiment I: Complementary Factor of Vision-Laser Point-Cloud Alignment In a first experiment, the vision point-cloud generated with images recorded by a Sony ActionCam HDR-AS100V mounted on Techpod is aligned to the laser point-cloud generated with a SICK laser scanner onboard of the RMAX. Figures 9, 10 and 11 show a satellite image, colored vision and laser point-cloud of the region of interest. Note that the laser did not receive response for parts of the roof and barn due to a steep observation angle, relatively low altitude of the RMAX, and due to non-reflective surfaces. On the other hand, the laser point-cloud contains less measurement noise and a higher level of detail as can be best seen in Fig. 12 which e.g. depicts a wind vane in the top right corner not observed by the visible light camera. These observations underline the complementary factor of the two point-clouds which, when aligned, result in a more complete model of the environment. Figures 12 and 13 illustrate the initial misalignment from side and top view respectively. This georeferencing error is given in Table 2 and consists of a translational offset of several meters and a small rotation. The transformation was obtained by careful manual alignment of the point-clouds and used to evaluate point-cloud registration methods quantitatively. Note that due to the noisy character of the data, this manual alignment should not be considered perfect as slightly varying alignments seem still visually satisfying. Nevertheless, this method allows to reason about convergence and general trends. Figures 14 and 15 show the transformation error for IPDA and ICP plotted over the number of iterations. Both ICP and IPDA converge to almost the same transformation. Furthermore, from the given plots it can be seen that the altitude offset converges first in very few iterations due to the dominant ground planes. The translational offset in x and y usually needs more iterations to converge. Figure 17 shows the aligned vision and laser point-cloud using IPDA. Figure 16(a)–(d) show the final alignments computed by the individual registration methods. The figures and Table 2 illustrate that all methods show convergence, however, small misalignment errors are visible for GICP and NDT. [] Fig. 9. Satellite image as reference. [] Fig. 10. Vision point-cloud. [] Fig. 11. Laser point-cloud colored by height [] Fig. 12. Side-view: vision point-cloud colored by pixel intensity and laser point-cloud in green. [] Fig. 13. Top-view: vision point-cloud colored by pixel intensity and laser point-cloud in green. Table 2. Initial misalignment transformation error and final translational and rotational offset for IPDA, ICP, GICP and NDT. The translation error [$$e\_{trans}$$] and rotation error [$$e\_{rot}$$] are computed as proposed in [12]. [] [] Fig. 14. IPDA: rotational (red) and translational (blue) misalignment error. [] Fig. 15. ICP: rotational (red) and translational (blue) misalignment error. [] Fig. 16. (a)–(d) illustrate that all registration methods converged, however, small misalignment errors are observable for GICP and NDT depicted by the red rectangles. [] Fig. 17. Aligned vision and laser point-cloud (green) using IPDA. The experiment underlines the complementary factor of employing laser and vision to obtain a more complete model of the environment. 5.2 Exp. II: Changes in the Environment This experiment evaluates if agents that possess only poor absolute position sensing capabilities can register to an a-priori obtained and well georeferenced point-cloud. This evaluation gives an idea of how well the different methods can deal with changes in the environment as well as about their region of convergence. [] Fig. 18. Point-cloud generated by the laser scanner mounted on the RW-UAV. The point-cloud consists of 568’839 points and is colored by height. [] Fig. 19. Point-cloud generated by the RGB camera mounted on the FW-UAV. The house area shown in the top consists of 163’595 points. [] Fig. 20. Initial misalignment and final registration using IPDA with t-distribution. [] Fig. 21. IPDA: translation (blue) and rotation error (red; multiplied by 1e2 for better visualization). [] Fig. 22. ICP: translation (blue) and rotation error (red; multiplied by 1e2 for better visualization). For this purpose, we align the vision point-cloud shown in Fig. 19 to the previously generated laser point-cloud given in Fig. 18. Several changes in the environment can be spotted, in particular, the vegetation, location of cars and of a small house. Furthermore, we generate a random large initial misalignment error between both point-clouds as shown in Table 3. Figures 20, 21 and 22 as well as Table 3 demonstrate that IPDA, in particular when employing t-distribution, results in the lowest final misalignment error, followed by GICP and ICP, whereas NDT diverged for this scenario. Table 3. Initial misalignment transformation error and final translational and rotational offset for IPDA, ICP, GICP and NDT. [] [] Fig. 23. The satellite image shows the flight path of the fixed-wing UAV in yellow and of the RMAX helicopter in red. The plots on the right show the altitude (top: RW-UAV, bottom: FW-UAV) in form of height above ground with respect to the individual starting positions. [] Fig. 24. One of the laser strips to be aligned to the vision point-cloud. [] Fig. 25. Point-cloud generated by the grayscale camera mounted on the FW-UAV. [] Fig. 26. The dense laser point-cloud is shown in blue. The prior misalignment is especially visible at the hill gradients. The tree region was only partially captured by the FW-UAV and could be densified by the laser scan. 5.3 Exp. III: Point-Cloud Sparsity In Experiment III, we present a very challenging dataset consisting of a tree region with few man-made structures. The mission procedure is illustrated in Fig. 23: The FW-UAV, equipped with a Sony ActionCam and a grayscale Aptina MT9v034 camera, generates a rough initial point-cloud as soon as it receives the command from the delegation framework initiated by the human operator. Subsequently, the RW-UAV scans the region of interest with the aid of the SICK laser for refinement. The generated laser and vision point-clouds are shown in Figs. 24 and 25 respectively. We deliberatively use the point-cloud generated by the low-resolution grayscale camera for point-cloud registration and employ IPDA to underline its performance when dealing with dense and sparse point-clouds. In contrast to Experiment I and II, the images are geo-registered by the onboard EKF instead of using the raw GPS measurements. As expected, the initial misalignment error is limited to 3.34, [$$-2.51$$], [$$-0.26$$] m for the translational and [$$-0.016$$], 0.0082, 0.0089 rad for the rotational offset. The initial misalignment and final registration are shown in Fig. 26. 6 Conclusion In this paper, we presented an automated delegation framework that translates top-level commands of the human operator into low-level commands of the employed agents. We validated the framework based on realistic scenarios in two locations including more than 20 flights using a RW- and FW-UAV representing an arbitrary fleet of heterogenous agents. Furthermore, we chose the task of scanning a common area as one exemplary mission of the delegation framework. The point-clouds acquired during this scanning process are automatically registered and transferred back to the human operator and visualized in the dynamic cognitive map. Our experiments show the complementary factor of vision-laser point-cloud registration from aerial views and demonstrate the successful deployment of the Probabilistic Data Association algorithm. The final goal of this project will be to allow accurate path planning of unmanned ground vehicles (UGV) or smaller multicopter UAVs based on the aligned map and, for instance, to delegate them inside the buildings’ interior. Future work will also include the integration of a previously presented human detection algorithm [16] into the delegation framework. The algorithm returns the UTM location of possible victims along with their detection uncertainties. Other agents may verify these possible human detections to decrease the false alarm rate. Acknowledgment The research leading to these results has received funding from the European Commission’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n[$$^\circ $$]285417 (ICARUS) and n[$$^\circ $$]600958 (SHERPA). Furthermore, the authors want to thank Gabriel Agamennoni and Simone Fontana for providing an initial implementation of the probabilistic data association algorithm. References 1. Oettershagen, P., Stastny, T., Mantel, T., Melzer, A., Rudin, K., Gohl, P., Agamennoni, G., Alexis, K., Siegwart, R.: Long-Endurance Sensing and Mapping Using a Hand-Launchable Solar-Powered UAV. In: Wettergreen, D.S., Barfoot, T.D. (eds.) Field and Service Robotics. STAR, vol. 113, pp. 441–454. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-27702-8\_​29 CrossRef 2. Conte, G., Rudol, P., Doherty, P.: Evaluation of a light-weight lidar and a photogrammetric system for unmanned airborne mapping applications. PFG Photogrammetrie, Fernerkundung, Geoinformation 2014, 287–298 (2014)CrossRef 3. Agamennoni, G., Fontana, S., Siegwart, R.Y., Sorrenti, D.G.: Point clouds registration with probabilistic data association. In: Proceedings of the International Conference on Intelligent Robots and Systems. IEEE (2016) 4. Doherty, P., Kvarnström, J., Rudol, P., Wzorek, M., Conte, G., Berger, C., Hinzmann, T., Stastny, T.: A Collaborative Framework for 3D Mapping Using Unmanned Aerial Vehicles. In: Baldoni, M., Chopra, A.K., Son, T.C., Hirayama, K., Torroni, P. (eds.) PRIMA 2016. LNCS (LNAI), vol. 9862, pp. 110–130. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-44832-9\_​7 CrossRef 5. Doherty, P., Heintz, F., Kvarnström, J.: High-level mission specification and planning for collaborative unmanned aircraft systems using delegation. Unmanned Syst. 1(1), 75–119 (2013)CrossRef 6. Skaloud, J.: Optimizing georeferencing of airborne survey systems by INS/DGPS. Ph.D. thesis, University of Calgary, Alberta (1999) 7. Britting, K.R.: Inertial Navigation Systems Analysis. Wiley, New York (1971) 8. Leutenegger, S., Melzer, A., Alexis, K., Siegwart, R.: Robust state estimation for small unmanned airplanes. In: 2014 IEEE Conference on Control Applications (CCA), pp. 1003–1010, October 2014 9. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256 (1992)CrossRef 10. Chen, Y., Medioni, G.: Object modelling by registration of multiple range images. Image Vis. Comput. 10, 145–155 (1992)CrossRef 11. Zhang, Z.: Iterative point matching for registration of free-form curves and surfaces. Int. J. Comput. Vis. 13, 119–152 (1994)CrossRef 12. Pomerleau, F., Colas, F., Siegwart, R., Magnenat, S.: Comparing ICP variants on real-world data sets. Auton. Robots 34, 133–148 (2013)CrossRef 13. Segal, A., Haehnel, D., Thrun, S.: Generalized-ICP. In: Proceedings of Robotics: Science and Systems, Seattle, USA, June 2009 14. Biber, P., Strasser, W.: The normal distributions transform: a new approach to laser scan matching. In: Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, (IROS 2003), vol.3, pp. 2743-2748 (2003). doi:10.​1109/​IROS.​2003.​1249285 15. Doherty, P., Kvarnström, J., Wzorek, W., Rudol, P., Heintz, F., Conte, G.: HDRC3 - a distributed hybrid deliberative/reactive architecture for unmanned aircraft systems. In: Valavanis, K.P., Vachtsevanos, G.J. (eds.) Handbook of Unmanned Aerial Vehicles, pp. 849–952. Springer, Heidelberg (2014) 16. Vempati, A.S., Agamennoni, G., Stastny, T., Siegwart, R.: Victim detection from a fixed-wing UAV: experimental results. In: Bebis, G., Boyle, R., Parvin, B., Koracin, D., Pavlidis, I., Feris, R., McGraw, T., Elendt, M., Kopper, R., Ragan, E., Ye, Z., Weber, G. (eds.) ISVC 2015. LNCS, vol. 9474, pp. 432–443. Springer, Heidelberg (2015). doi:10.​1007/​978-3-319-27857-5\_​39 CrossRef Footnotes 1 Due to space constraints in this publication, only a subset of the data can be presented. However, the datasets can be requested from the authors.   2 For more details about the state estimation framework we refer to [8].   3 The vision point-clouds are generated using the commercial software pix4d.   Actuation © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_6 A Modular Folded Laminate Robot Capable of Multi Modal Locomotion Je-sung Koh¹  , Daniel M. Aukes^(1, 2), Brandon Araki³, Sarah Pohorecky³, Yash Mulgaonkar⁴, Michael T. Tolley⁵, Vijay Kumar⁴, Daniela Rus³ and Robert J. Wood¹ (1) John A. Paulson School of Engineering and Applied Sciences and the Wyss Institute for Biologically Inspired Engineering, Harvard University, Cambridge, USA (2) The Polytechnic School, Fulton Schools of Engineering, Arizona State University, Tempe, USA (3) CSAIL, The Stata Center, Massachusetts Institute of Technology, Cambridge, USA (4) The GRASP Laboratory, University of Pennsylvania, Philadelphia, USA (5) Department of Mechanical and Aerospace Engineering, University of California, San Diego, USA     Je-sung Koh Email: jskoh@seas.harvard.edu Abstract This paper describes fundamental principles for two-dimensional pattern design of folded robots, specifically mobile robots consisting of closed-loop kinematic linkage mechanisms. Three fundamental methods for designing closed-chain folded four-bar linkages – the basic building block of these devices – are introduced. Modular connection strategies are also introduced as a method to overcome the challenges of designing assemblies of linkages from a two-dimensional sheet. The result is a design process that explores the tradeoffs between the complexity of linkage fabrication and also allows the designer combine multiple functions or modes of locomotion. A redesigned modular robot capable of multi-modal locomotion and grasping is presented to embody these design principles. Keywords Printable robotsFolded laminate devicesMulti-modal locomotion 1 Motivation, Problem Statement and Related Work Folding-based mechanisms inspired by origami have recently become popular for rapid design iterations for complex structures and mechanisms. These methods are low-cost and inherently accessible, providing non-experts access to robot design [1]. In addition, the scalability of folding-based concepts enables new ways to create robots over size scales from micro to meso scales [2–4]. The potential of folding as an assembly method has been demonstrated in education [5] and in research exploring biological hypotheses in bio-mimetic robots [6–8]. One of the primary advantages of folded robot design is that the inherent accessibility and low cost of the method permits designers to get design feedback early and often via fast prototyping cycles. While prototyping is relatively fast, the complexity of multi-layer articulated designs can be time-consuming and unwieldy to design; efforts have recently been made to develop computer-aided design software and an integrative design methodology to make the design and manufacture of foldable robots more convenient for non-experts [9, 10]. Consequently, the development environment for the design and fabrication of folding-based mechanisms has advanced significantly. Manufacturing principles for laminates [10] and design principles for folding [11] have been abstracted mathematically, permitting the automated merging of complex fold patterns [12, 13], and advancing the development of various innovative fabrication methods, such as self-folding [14] and printed-circuit MEMS (PC-MEMS) [15]. In this paper, we present an intuitive design methodology to map a 3D kinematic linkage mechanism to a 2D fold pattern design. The robot design starts with drawing lines and points to specify an abstract linkage mechanism, as in conventional machine design; however, the lines and points are then transferred to a 2D folding pattern by using one of three fundamental methods. A folded laminate crawler is then built by using the proposed method. Experiments on the crawler show the inherent compliance of a folding-based robot. A modular design concept for the folded robot is described to reduce design complexity and enhance convenience in combining more than two functions or modes of locomotion. This work builds on the previous “Flying Monkey” robot [16] by a simplifies the design to a single folded sheet and introduces modularity of components and appendages to streamline the customization of developing a low-cost multi-modal robot. 2 Folding Pattern Design Robots, particularly legged robots, often have kinematic linkages composed of links and various types of mechanical joints that map an actuator or motor input to a desired motion. The mechanism can be described by lines and points that correspond to links and joints in an analogous way to the axes of rotation and coordinate frames of the Devavit-Hartenberg convention. In mechanism design, designers generally start with drawing lines and points to create an abstract schematic diagram of the desired linkage. After drawing the schematic diagram, a type of joint, such as a revolute, prismatic, or spherical joint, is defined at each point to complete a mechanism. This step determines the kinematics of the mechanism, such as the range of motion and degrees of freedom (DOF). In converting this abstract schematic to a 2D fold pattern, revolute joints that correspond to pin or hinge joints in conventional mechanical designs are transformed to fold lines connecting two facets with a single degree of freedom. The fold line and the facet are the basic elements of the folding mechanism. The pattern design consists of placing these basic elements at links and joints of the schematic diagram on a 2-dimensional plane that allows them to be folded into the three-dimensional mechanism. These two corresponding elements in folding pattern design and mechanical design - the fold line and the revolute joint - enable a designer to transform a schematic diagram into a folding pattern design intuitively. As the complexity of a mechanism increases, however, the pattern design becomes non-intuitive, and the resulting folded shape and motion of the mechanism become difficult to visualize. [] Fig. 1. Pattern designs for a four-bar mechanism. (a) Schematic diagram of a four-bar mechanism and its range of motion. (b) The four-bar mechanism consists of revolute joints and links in three dimensions for conventional machine design. (c) The multi-laminate (d) joining two ends of a single serial link and (e) the joint offset design. 3 Technical Approach and Results 3.1 Fold Pattern Design for Closed Chain Linkages Most linkages consist of closed chains that generate a desired motion from actuators which have a limited range of motion. The design of closed (parallel) chains can often be more challenging than for open (serial) chains, due to the additional constraints imposed by flat laminate fabrication methods. A closed mechanism assembled in its flat state loses design freedom by the fact that link vectors must sum to zero during planar fabrication steps. Here we propose three classes of intuitive approaches to designing closed-chain folding mechanism; these approaches can be applied to general folding pattern design with this known tradeoff. To illustrate the application of the three design methods, we will describe how they can be applied to the folding pattern design of a four-bar linkage mechanism. A four-bar linkage mechanism is a simple, single-degree-of-freedom planar mechanism with few canonical design variables, often employed in robotic applications [4, 15]. Four-bar linkages with equal-length opposite links maintain parallelism between these links while permitting rotation of the interior angles, as shown in Fig. 1(a), and are often used as transmissions to transform linear reciprocating motion into rotational motion or vice-versa. As in Fig. 1(b), this can be depicted in an abstract three dimensional perspective. In a two-dimensional design, however, a significantly different approach to draw this kinematic diagram is needed because it is derived from flat surfaces. The first method consists of stacking and joining layers, which is called the “multi-laminates” method as shown in Fig. 1(c). By joining facets at each end of the structure, the folding mechanism becomes a closed chain when folded as depicted in the right side of Fig. 1(c). The second method requires joining the ends of a single serial link as shown in Fig. 1(d). This permits individual links to be of any arbitrary length, however a manual assembly step is required to join the two ends before folding. In addition, a much larger machining area (on a single layer) is used compared to the first method. These two methods can be designed intuitively. However, the third strategy – the “joint offset” method – can be less intuitive. In this case, links are arranged beside each other (rather than on top of each other) in order to permit links to be cut from a single layer of material. In other words, links which would normally overlap are shifted along their rotational axes until they are neighboring facets in a single sheet of material, as shown in Fig. 1(e). This method allows the 2D pattern design of the closed chain to be designed in a single plane without post-processing. Almost all kinematic linkage mechanisms can be designed by a combination of these three methods, although the angle and length of links allow variation in the shape of a folding pattern. 3.2 Single Sheet Crawler Design To make a multi-legged crawling robot, we designed a closed chain linkage that has two primary links - interior and exterior frames - which are linked by four serial linkages as shown in Fig. 2(a). This type linkage has been the basis of many previous crawling robots that were built with folded laminate mechanisms [3, 4, 16]. In this paper, two spherical joints are employed in four serial leg links to generate the rotational motion of feet that will be attached under the middle link. The motor rotates between the interior link and the exterior link with a crank shaft as shown in Fig. 3. For patterning on a single sheet, the exterior linkage is cut and the linkage is deployed and flattened on a plane as shown in Fig. 2(b). After fabrication of the folding laminate, the robot mechanism is finished by joining the exterior linkage. The task of joining (i.e. gluing) is simple and intuitive but it becomes onerous in complex mechanisms. Currently, many technique can be applied such as a solder fillet locking [17], a shape memory alloy riveting [18]. [] Fig. 2. (a) Kinematic diagram for the crawler with four legs. (b) Flattened mechanism after cutting the exterior link. (c) Transforming from lines and points to facets and fold lines. (d) Generating detailed cut lines by placing 2D patterns of mechanical elements. (e) The single foldable laminate for the crawler. (f) Folded laminate crawler after assembly After sketching the mechanism diagram, links and joints are transformed into facets and fold lines intuitively, as shown in Fig. 2(c), and the detailed shape of the fold pattern is determined. The square shape is the basic unit cell, because all links and joints are perpendicular to each other in this case, but other kinds of polygons can be used, as a unit cell depends on the relative direction of joints and links. The spherical joints are marked as single lines at this step, but they will be replaced by a spherical six-bar pattern which has three DOF (pitch, roll, and yaw) motion between two opposite links [19]. [] Fig. 3. Leg motion of the folded laminate crawler. 3.3 Fabrication Given the desired 2D patterns for the linkage elements, the detailed cut patterns can be drawn using computer-aided design (CAD) software. PopupCAD [20] has been developed as a convenient user interface for designing the folded-laminate devices. The kinematic pattern design (Fig. 2(c)) is drawn in Solidworks (Dassault Systems Co.). By importing the design into PopupCAD, the detailed 2D pattern elements are placed in corresponding locations as shown in Fig. 2(d). In the crawler design, the spherical joints and four-bar linkages are added in the kinematic mechanism. The spherical six-bar linkages are placed as spherical joints that have three DOFs. In addition, a serial four-bar linkage as shown in Fig. 2(d) is added at the green dotted line in the main mechanism to constrain the final mechanism to 2-DOF rotational motion. The 2D pattern design of the serial four-bar linkage is a serial connection of two four-bar linkages that is drawn by the joint-offset method described in Fig. 1(e). Each end of the serial four-bar linkage is attached at the interior and the exterior link, and the relative motion between the two links is constrained into parallel circular motion. The manufacturing process for the folding laminate is Printed-Circuit MEMS (PC-MEMS) (Fig. 2(e)–(f)) [10, 15]. The detailed cut lines for film layers are generated by PopupCAD automatically while verifying the manufacturability of each layer. The folding laminate in this paper has five layers: a rigid layer, an adhesive layer, a flexure layer, an adhesive layer, and a rigid layer. The right-hand side of Fig. 2(d) is the final five-layer design of the crawler. Each layer is cut by laser machining and laminated by a thermal laminator. After the initial cut, each layer has alignment pin holes in support parts which are cut out during the final cut. Figure 2(e) shows the laminate after the final cut. The laminate is assembled by joining the exterior link and each end of serial four-bar linkages as shown in Fig. 2(f). Finally, the four legs, are attached; the legs can be changed to suit the terrain the robot will be crawling over. The legs can be added in the laminate design but separated in order to test various leg designs easily. [] Fig. 4. (a) Plot tracking the foot at 21 Hz (upper) and 52 Hz (lower). (b) Crawling speed versus motor driving speed. 3.4 The Folded Laminate Crawler The crawler is driven by a DC motor with a crank shaft. The motor is mounted on the interior link and the crank shaft is connected to the exterior link. The interior link and the exterior link rotate with respect to each other and the leg links between the two frames swing in a rowing motion as shown in Fig. 3. In this case, the rotation radius of the crank shaft is 1.5 mm, the length of the leg link is 6.5 mm, and the legs are 18.5 mm. By kinematic calculation, the stride length and the swing angle of the legs are 7.7 mm and 25[$$^{\circ }$$]. However, the performance of the crawler varies depending on the driving speed (i.e., rotational frequency of motor). As the driving frequency increases, both speed and stride length increase as shown in Fig. 4. However, unfortunately, the crawler starts to flip over in high frequencies above 52 Hz and the speed and the stride length drop at 62 Hz. Experimental results are listed in Table 1. Table 1. Performance of the crawler +---------------------------------+-------+-------+-------+ |   | 21 Hz | 52 Hz | 65 Hz | +:================================+:======+:======+:======+ | Designed speed [cm/s] | 16 | 40 | 50 | +---------------------------------+-------+-------+-------+ | Experimental speed [cm/s] | 7 | 34 | 32 | +---------------------------------+-------+-------+-------+ | Designed stride length [mm] | 7.7 | 7.7 | 7.7 | +---------------------------------+-------+-------+-------+ | Experimental stride length [mm] | 3.6 | 6.6 | 5 | +---------------------------------+-------+-------+-------+ Folded laminate devices are inherently compliant because the links are sheets which have a low bending stiffness induced by a low area moment of inertia. Therefore, the speed and the stride length changes based on the driving frequency. Figure 5 shows the tracking data and its frequency spectrum for the crank shaft and the foot driving in air. The foot is more sensitive to driving frequency than the crank shaft, and the range of fluctuation in vertical motion is much larger than the horizontal motion. This may be caused by the limited horizontal range of motion in the spherical six-bar linkages which are used as the spherical joints. The compliance of the structure may reduce precision and linearity of the system and cause difficulties in control. [] Fig. 5. Trajectory of the shaft and the foot in air (upper) and frequency response of the trajectories (lower). [] Fig. 6. (a) Kinematic diagram for the adaptive gripper. (b) 2D pattern design for the gripper. (c,d) Gripping force experiments demonstrating adaptive gripping in various size of objects form 2 mm to 12 mm in diameters (a–f, 2 mm increments). 3.5 Adaptive Gripper The gripper design is adapted from the Festo adaptive gripper [21]. To decrease the size of the gripper, the minimum number of links are used, as in the kinematic diagram in Fig. 6(a). Using the pattern design method describe in Sect. 2, the kinematic linkage is flattened and transformed into a 2D pattern as shown in Fig. 6(b). The closed linkage can be made by joining each end of the serial links; the links inside of the closed link are positioned by an offset joint in the middle of facets. The gripper is opened by a coiled shape memory alloy actuator placed at the middle links as indicated by the red coil in Fig. 6(a) and closed by the passive plate spring indicated by the blue curved links in Fig. 6(a) and two blue facets in Fig. 6(b). Therefore, the gripper grips an object passively using the spring, so that it is not required to apply a force to hold the object. The adaptive gripper grasps the object by bending two internal links as shown in Fig. 6(d). Each segment of fingers is zero-DOF. The compliance of the sheet link enables adaptable gripping with various size of objects as shown in Fig. 6(c). 3.6 Modularization with Magnetic Connections Due to the limited work space for the 2D pattern design and complexity caused by having multiple components in a single multi-functional device, a modularized design is a simpler and more effective way to design folded laminate devices. The system can be modularized into mechanisms, motors, accessories, and interfaces in a high level design. Each module can be designed separately but with a common connection/mounting protocol that allows the modules to be combined and recombined very easily as described in Fig. 7. In this paper, a gripper module and a crawler module are described and assembled with a magnetic mount which is capable of automatic positioning and provides electric power connections. In addition, motors and other interfaces such as to connect to a quad-rotor can be modularized for pick-and-place functionality. [] Fig. 7. Schematic diagram of the modular multi-modal robot assembled by magnetic connectors for mechanical and electrical connection. a: quadrotor, b: magnetic mount for the quadrotor, c: crawler, d: motor, e: a gripper accessory, e’: antenna accessory. [] Fig. 8. Modular folded laminate crawler. (a) Gripper accessory module, crawler interface module, motor module with magnetic mounts (dotted lines) and (b) Crawler with the motor module. (c) Crawler with the motor and the gripper modules and (d) the antenna module. (e) Crawler with the gripper accessory and the quad-rotor interface module for multi-modal locomotion, flying, gripping, and crawling. In contrast to the philosophy of origami, in which shapes are folded from a single sheet of paper, a multi-laminate and modularized design may reduce the complexity of a multifunctional robot by making it easier to join components together while also expanding the design space and feasible functions of the foldable robot. As shown in Fig. 8, the adaptive gripper accessory module, the crawler interface module, and the motor modules are manufactured separately. They have magnetic mounts whose poles are synchronized with each other. Red dotted lines circle the magnetic mount for the gripper and black dotted lines circle the magnetic mount for the motor. When the modules are placed close to their positions, the magnets pull each other and attach to their mounts (forming both electrical and mechanical connections) automatically. Two pairs of Neodymium magnets (2 mm in diameter, 1.6 mm in thickness) are used for the magnetic mount and the pulling load is 180 g each and 360 g as a pair. The magnetic force is much higher than the crawler’s weight (4.5 g) and it is hard to break the connection while it moves. [] Fig. 9. The Flying Monkey robot moves with a feedback control in a motion capture arena.(a) Controlled crawling and gripping. The robot following the straight line (dotted-line). (b) Controlled flight. To be capable of multiple functions and multi-modal locomotion, various accessory and interface modules can be attached by pick-and-place. As an accessory module, the active antenna shown in Fig. 8(d) steers the crawler by changing its center of mass. As an interface module for multimodal locomotion, a small quad-rotor, an open-source drone commercially available (Crazyflie 2.0 [22]), can drastically improve the range and speed of the robot as shown in Fig. 8. 4 Results Intuitive folding patterns are directly transformed from the kinematic linkage diagrams that are used as conceptual designs in conventional machine design. In this paper, a folded laminate crawler and a adaptive gripper are designed on a single sheet using three fundamental methods for designing a closed chain linkage mechanism in 2D space. The inherent compliance of the folding structure causes the folded laminate crawler to exhibit resonant behavior. The concept of modular design may reduce complexity in the folding pattern design by modularization of interfaces for multimodal locomotion, accessories for additional functions, and actuators. To demonstrate multi-modal functionality, the robot performs specific tasks by flying, gripping, and crawling. To reduce complexity of modules in terms of the number of actuators, each module has a single actuator which needs just a single control and power input. For this reason, crawler can crawl only forward and not steer. However, the quadrotor can introduce additional torques, so that the robot can be controlled by the microcontroller on quadrotor. Figure 9 shows the robot crawling straight, gripping, and hovering controlled by the quadrotor in a motion capture arena. A similar ground locomotion control scheme was employed by the original Flying Monkey [16]. 5 Discussion To simplify the design and fabrication of robots, foldable design methods must be further abstracted and automated. The design methods presented in this paper may not be applicable to specific cases such as a robot capable of transporting high payloads. Static and kinematic requirements should be addressed in design, for example by inserting extra folding links to improve the robustness of linkages based on analytical or numerical models of the device. Lastly, the selection of proper actuation and power delivery methods for modular robots is required to finalize the robotic system for operation in the field. Acknowledgment This research was supported by the National Science Foundation (EFRI-1240383 and CCF-1138967) and the Wyss Institute for Biologically Inspired Research. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the National Science Foundation. References 1. Bezzo, N., Mehta, A., Onal, C.D., Tolley, M.T.: Robot makers: the future of digital rapid design and fabrication of robots. IEEE Robot. Autom. Mag. 22(4), 27–36 (2015)CrossRef 2. Wood, R., Avadhanula, S., Sahai, R., Steltz, E., Fearing, R.: Microrobot design using ber reinforced composites. J. Mech. Des. 130(5), 052304 (2008)CrossRef 3. Hoover, A.M., Steltz, E., Fearing, R.S.: Roach: an autonomous 2.4g crawling hexapod robot. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008, pp. 26–33 (2008) 4. Birkmeyer, P., Peterson, K., Fearing, R.S.: Dash: a dynamic 16g hexapedal robot. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2009, pp. 2683–2689. IEEE (2009) 5. Cybulski, J.S., Clements, J., Prakash, M.: Foldscope: origami-based paper micro-scope. PLoS ONE 9(6), 1–11 (2014)CrossRef 6. Ma, K.Y., Chirarattananon, P., Fuller, S.B., Wood, R.J.: Controlled fight of a biologically inspired, insect-scale robot. Science 340(6132), 603–607 (2013)CrossRef 7. Koh, J.S., Yang, E., Jung, G.P., Jung, S.P., Son, J.H., Lee, S.I., Jablonski, P.G., Wood, R.J., Kim, H.Y., Cho, K.J.: Jumping on water: surface tension-dominated jumping of water striders and robotic insects. Science 349(6247), 517–521 (2015)CrossRef 8. Jayaram, K., Full, R.J.: Cockroaches traverse crevices, crawl rapidly in connedspaces, and inspire a soft, legged robot. Proc. Nat. Acad. Sci. 113(8), E950–E957 (2016)CrossRef 9. Mehta, A., DelPreto, J., Rus, D.: Integrated codesign of printable robots. J. Mech. Robot. 7(2), 021015 (2015)CrossRef 10. Aukes, D.M., Goldberg, B., Cutkosky, M.R., Wood, R.J.: An analytic framework for developing inherently-manufacturable pop-up laminate devices. Smart Mater. Struct. 23(9), 094013 (2014)CrossRef 11. Demaine, E.D., O’Rourke, J.: Geometric Folding Algorithms. Cambridge University Press, Cambridge (2007)CrossRefMATH 12. Sung, C., Demaine, E.D., Demaine, M.L., Rus, D.: Joining unfoldings of 3-d surfaces. In: ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, V06BT07A033–V06BT07A033 (2013) 13. Onal, C., Tolley, M., Wood, R., Rus, D.: Origami-inspired printed robots. IEEE/ASME Trans. Mechatron. 20(5), 2214–2221 (2015)CrossRef 14. Felton, S., Tolley, M., Demaine, E., Rus, D., Wood, R.: A method for building self-folding machines. Science 345(6197), 644–646 (2014)CrossRef 15. Whitney, J.P., Sreetharan, P.S., Ma, K.Y., Wood, R.J.: Pop-up book mems. J. Micromech. Microeng. 21(11), 115021 (2011)CrossRef 16. Mulgaonkar, Y., Araki, B., sung Koh, J., Guerrero, L., Aukes, D.M., Makineni, A., Tolley, M.T., Rus, D., Wood, R.J., Kumar, V.: The fying monkey, a multifunctional mesoscale robot that can run, and grasp. In: IEEE International Conference on Robotics and Automation (ICRA) (2016) 17. Sreetharan, P.S., Whitney, J.P., Strauss, M.D., Wood, R.J.: Monolithic fabrication of millimeter-scale machines. J. Micromech. Microeng. 22(5), 055027 (2012)CrossRef 18. Kim, J.-S., Lee, D.-Y., Koh, J.-S., Jung, G.-P., Cho, K.-J.: Component assembly with shape memory polymer fastener for microrobots. Smart Mater. Struct. 23(1), 015011 (2014)CrossRef 19. Koh, J.S., Cho, K.J.: Omega-shaped inchworm-inspired crawling robot with large-index-and-pitch (LIP) SMA spring actuators. IEEE/ASME Trans. Mechatron. 18(2), 419–429 (2013)CrossRef 20. Popupcad. http://​www.​popupcad.​org/​ 21. Festo. https://​www.​festo.​com/​cms/​encorp/​9779.​htm 22. Crazyflie 2.0. https://​www.​bitcraze.​io/​ © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_7 Combined Energy Harvesting and Control of Moball: A Barycentric Spherical Robot Joseph Bowkett¹  , Matt Burkhardt¹   and Joel W. Burdick¹   (1) California Institute of Technology, Pasadena, California 91125, USA     Joseph Bowkett (Corresponding author) Email: jbowkett@caltech.edu URL: http://robotics.caltech.edu/   Matt Burkhardt URL: http://robotics.caltech.edu/   Joel W. Burdick URL: http://robotics.caltech.edu/ Abstract The mobile sensor platform Moball uses an array of sliding magnets and solenoids inside a spherical shell to both harvest energy and displace its center of mass or barycenter from its center of rotation in order to control the path along which it rolls. Previous simulations of the harvesting potential for the complete system are validated experimentally, and certain phenomena that restrict effective operating conditions for energy harvesting are investigated. Tracking of characteristic trajectories for a single mass control element is used to assess the performance of the solenoids as actuators, and the ability of the system to generate a control torque during motion is demonstrated. Keywords Barycentric spherical robotMoballSliding massSolenoid actuatorEnergy harvestingLinear position control 1 Background Moball (see Fig. 1) is intended to serve as a low cost wind-driven sensor platform that could be deployed in numbers to form a distributed network for taking scientific measurements over a large geographic area [1]. The Earth’s polar ice caps are of particular interest to scientists as perennial Arctic sea ice has been shown to be steadily receding, and predictive models for how this is occurring rely on data such as temperature variation, atmospheric composition, and wind velocity at ground level [2]. Most previous efforts to take such readings over large areas use remote sensing techniques such as radar backscatter signatures, but these relative methods still require absolute references of readings on the ground for accurate analysis [3]. The task of collecting such ground-truth measurements is well suited to a Moball network as each device would contain the necessary gas meters, thermometers, and other sensors, as well as the means to broadcast readings across the network to base stations or overhead satellites. The windswept polar regions would also provide ample propulsive power for each system to generate and store electricity to power sensors and control its path year round, as compared to solar panel reliant systems that are inoperable during winter months. [] Fig. 1. Overview of Moball structure with electromagnetic components Mars and Titan are other scientific targets where Moball’s ability to scavenge wind energy could allow long duration climate measurements in domains where sufficient solar energy is not realizable. Passive versions of a spherical rolling sensor platform such as the NASA/JPL Tumbleweed have previously been developed for use in exploring Antarctic regions or Mars [4], but as these lack any motion control they are steered solely by wind currents which may lead them away from useful measurement locations. Moball’s unique combination of energy harvesting and barycentric control is well suited for long duration missions in these example domains. The Moball platform consists of 6 solenoidal tubes arranged radially around the center of a spherical shell composed of a nylon cladding surrounding curved fiber-glass battens that arc between the poles as shown in Figs. 1 and 2 right. Each tube has a number of solenoid coils spaced along its length and contains a specially designed mass which consists of a rare earth magnet (NdFeB) between two purpose shaped “back irons”. These back irons serve to redirect the magnetic field outwards from the axis of the tube increasing inductive coupling with the coil and maximizing harvested energy [5]. The system is termed ‘barycentric’ because it induces motion by displacing its center of mass from its center of rotation by simultaneously coordinating the position of all 6 magnets. While the wind will provide the majority of propulsive force, barycentric control allows the vehicle to be steered in useful directions. 2 Energy Harvesting When the ball moves as a result of external forces such as wind or gravity the magnets in each tube pass through their respective coils converting mechanical work into electrical energy which can be scavenged in order to power sensor electronics or be stored in batteries. This section seeks to validate simulations of the energy harvesting potential for the complete Moball system, and investigate any unmodeled factors that influence the maximum power that can be generated. 2.1 Design of Electromagnetic Components and Previous Results Previous work on Moball sought to maximize the energy that can be harvested from the system by optimizing coupling between the magnet and coils as well as configuration of the coils along each tube. The FEM software JMAG was used to simulate inductive coupling for different configurations of permanent magnets and field shaping “back irons” finding the arrangement pictured in Fig. 1 most effective [5], using two coils equispaced 26 cm from the center of the 70 cm long tube. Simulations and benchtop experimental results estimated that one coil on one of the six tubes in the system can generate a peak of 1.05 W when Moball rotates at 19rpm which is realizable in Arctic winds, with a load resistance of [$$40{\Omega }$$] as seen in Fig. 2 left. A relationship was also found between the coil’s peak power output for a given rotational velocity and the circuit’s load resistance which influences the mechanical damping experienced by the magnet. Springs at the end of each tube allow some of the kinetic energy lost when the magnet reaches the end of each tube to be recovered, which extends the energy recovery capability of Moball to higher rotational speeds [6]. [] Fig. 2. Left: Comparison of simulated and experimental results for power produced by one solenoid coil at different load resistances and rotational velocities [5] Right: Complete Moball system with internals exposed through zippered shell 2.2 Harvesting Experimental Setup We developed an apparatus to accurately test the energy harvesting capability of Moball’s solendoidal mechanism as seen in Fig. 3. The testing mechanism consists of a motor to accurately rotate the solenoidal framework, sensors to measure the operating conditions, and a custom circuit to measure energy production in real time. The axis of rotation was chosen to maximize power generation from 4 of the 6 solenoidal axes. The power from each coil is measured from an amplified voltage over a current sense resistor in series with the load resistance for each coil. A laser distance sensor mounted within the central box allows the position of the magnet in one of the tubes to be tracked during Moball’s rotation. The centrally located datalogger also employs an IMU and optical sensors tripped by index markers on the frame to infer its rotation rate and orientation during each test. [] Fig. 3. Moball with most battens, one tube pair, and zippered shell (seen bottom left) removed to allow mounting on support frame 2.3 Validation of Harvesting Potential The energy harvesting test apparatus was run over a range of rotation rates and load resistances. The complete results can be seen in Fig. 4. The solenoidal system produced a peak of 7.23 W at 20 RPM and [$$47{\Omega }$$] load resistance ([$$R\_L$$]). As predicted in simulation the generated power is limited at low resistances, despite the fact that the mechanical damping experienced by the magnet as it passes through each coil is highest at low load resistance. [] Fig. 4. Left: Complete results of measured energy harvesting potential of the entire Moball system Right: Applicable experimental results compared to simulations from Asama et al. [5] This is thought to occur as a result of Faraday’s law of induction for solenoids and power being dissipated over the coil resistance ([$$R\_C$$]) of [$$9.5{\Omega }$$]. The law states that the voltage induced across the coil is [$$V\_{coil} = \alpha (x) \dot{x}$$] where [$$\dot{x}$$] is the velocity of the magnet and [$$\alpha (x)$$] is the inductive coupling between the magnet and the coil which is a nonlinear function of the distance between them. The general form of the [$$\alpha $$] function can be seen in Fig. 7 in Sect. 3. By Lenz’ law this voltage potential opposes the motion of the magnet causing it to decelerate as described by Eq. (1a). At [$$R\_L = 5\Omega $$] the inductive time constant for the harvesting circuit is 3.5 ms, whereas the magnet spends 440 ms within the influence of the coil resulting in negligible current transients. This allows the power over the load resistance to be approximated by Eq. (2). [$$\begin{aligned} \ddot{x} = g sin(\theta ) - \frac{\alpha (x)I}{m} \qquad (a) \qquad P = \frac{1}{T} \int \_0^T R\_L I^2(t) dt \qquad (b) \end{aligned}$$] (1) [$$\begin{aligned} (R\_C + R\_L)I = V\_{coil} - L\_C(x)\frac{dI}{dt} \approx V\_{coil} \end{aligned}$$] (2) When the magnet enters the [$$\alpha $$] function there is a spike of electrical power of which [$$\frac{R\_C}{R\_C+R\_L}$$] is dissipated as heat within the coil, and this power rapidly decelerates the magnet such that the majority of its passage through the coil is at low velocity. As a result the net harvesting power described by Eq. (1b) is low for [$$R\_L\le {}R\_C$$] despite the magnet losing most of its velocity to electromagnetic damping. At high load resistances the opposite effect occurs, where the majority of mechanical energy converted to electrical goes to harvesting but the total conversion is low and the magnet passes through the coil very quickly. Previous experimental results followed the same trend as the model, though the magnitude of experimental power relative to simulated power for a single coil on one tube was shown to vary as a function of the rotation rate, as seen in Fig. 2 left. This trend was also evident on the complete system where peak experimental power was 84 %, 82 %, and 79 % of peak simulated power for 12 RPM, 16 RPM, and 20 RPM respectively. This is thought to be largely due to air damping of the magnet motion, as this was not accounted for in simulation. Future designs will employ vents spaced along each tube to allow air to mitigate this effect. Air damping may also serve to explain the peak of harvesting potential appearing to occur at higher load resistances than predicted by simulation, as it slows a magnets passage through each coil therefore increasing the duration over which power is generated. This may prove advantageous as there are other benefits to harvesting with a higher load resistance as explained in Sect. 2.5, and will serve to allow the energy harvesting potential of the system to be optimized more accurately for intended operating conditions. [] Fig. 5. Left: Normalized difference between power generated by inner and outer coils on each tube averaged across all load resistances Right: The speed at which the system becomes unstable and accelerates under a constant applied torque as a function of load resistance 2.4 Harvesting Potential of Inner and Outer Coils At low speeds the primary force at play pulling the magnets through the coils is gravity, however, as rotation rate increases the centrifugal force acting on the magnets begins to bias their motion towards moving away from the center of rotation. This has the effect of creating a disparity between the harvesting potential of the inner and outer coils of each tube as seen Fig. 5 left. The existing system employs identical circuitry on the inner and outer coils but the generated power may be increased by tuning the wire turns and load resistance of each to suit the magnet velocity at their respective locations under intended operating conditions. 2.5 Unstable Behaviour at Speed Limit As evidenced from simulation and Fig. 4, the power generated by the system generally increases with the ball’s rotation rate until the centripetal acceleration forces overcome the gravitational forces, limiting the magnet’s movement through the inner coils. Asama et al. predicted this would occur around 21 RPM for a load resistance of [$$25{\Omega }$$], the peak generation for the configuration of two coils per tube. Experimental results agreed well with this value, as seen Fig. 5 right. Simulations showed a rapid decline in peak power generation beyond this speed, however, this was difficult to replicate in testing due to a phenomena where the rotating mechanism became unstable after passing the point at which magnets cease to traverse the entire tube length. Higher speeds induce a positive feedback loop, where the rotating apparatus accelerates more rapidly as magnet travel is further reduced, diminishing the torque opposing forward motion. [] Fig. 6. Position of magnet in tube 3 during unstable behaviour of system at speed limit, which limits suitable operating conditions Figure 6 shows an example of this instability occurring in a test of [$$30{\Omega }$$] load resistance, where the blue line shows the position of the magnet in tube 3 relative to the outer end of the tube, and the orange curve shows the rotation rate of the ball. At 9 s the magnet does not reach the inner end of the tube and the RPM begins to increase rapidly with constant torque applied by the motor. Soon after the next rotation the magnet becomes locked at the outer tube end; the vibrations in the gyroscope signal from the impacts of the magnets at the tube ends stops as all other magnets also cease motion. At 15 s voltage supplied to the DC motor is reduced and the system transitions back to normal operation. This behaviour is evidence that Moball design parameters must be carefully selected to avoid this unstable regime where power cannot be generated. High speed Arctic winds can drive Moball at well above these speeds. Depending on the intended operating environment the ball diameter may need to be reduced in order to reduce wind force and therefore speed, as well as centripetal acceleration of the magnets (being a function of radius), to avoid entering this unstable magnet regime. 3 Sliding Masses as Actuators 3.1 Control of Barycentric Spherical Robots Spherical robots have long been the subject of academic study due in part to their nonholonomic constraints [7]. We model Moball’s motion with zero slip and zero rotation about the ground normal. Many driving mechanisms have been investigated for spherical robots, but perhaps the most common is that of barycenter offset [8]. Whether by use of an internal pendulum, sliding masses in tubes, or other means, the barycenter is displaced from the normal of the ground contact point, allowing gravity to roll the ball. Any feasible ball trajectory consisting of a 2D space of accelerations has a one to one mapping to the 2D space of net ball torques which will track it. However, as the barycenter can be positioned in three dimensions, center of mass location at any point on a given vertical line will produce the same net torque, allowing this extra dimension to be used for other design purposes such as minimizing energy expenditure. The configuration of magnet positions that produce a given barycenter location is also not unique; optimizing the mapping of desired ball path to magnet trajectories is the subject of current research beyond the scope of this paper. The use of radially aligned sliding masses has previously been proposed as a method of barycentric control in the Spherobot [9], and August [10]. Both of these employed four masses mounted on equispaced radially aligned leadscrews driven by stepper motors, which places limits on actuation speed. By contrast, magnets driven by solenoid coils are able to traverse their entire range of motion in fractions of a second, at the expense of being more difficult to achieve precise positioning. While the centers of mass achievable by four or six equispaced radially mobile masses is similar, Moball’s configuration has the added benefit that when moving in a straight line under external force such as wind, two of the six coils can be actuated in order to balance and bias the direction of motion, while the remaining four are used to harvest energy. 3.2 Tracking Representative Magnet Trajectories In order to gain an appreciation of the type of magnet trajectories that may be necessary to track arbitrary paths, numerical optimization was performed on the space of magnet positions needed to achieve simple motion primitives such as moving in a straight line or following an arc of constant radius. To reduce the space of magnet positions to unique trajectories, the system center of mass was assumed to remain in a plane parallel to the ground and passing through the center of rotation, and the instantaneous velocity of each magnet minimized. These simulations showed these magnet trajectories to be primarily comprised of sinusoids of varying frequency. It was therefore of interest to investigate the solenoid actuators’ ability to track sinusoidal trajectories in the reduced system of a single tube, and understand the accuracy and speed of positioning which could be achieved. [] Fig. 7. Experimental setup for discretized magnet trajectory control, with rig mounted on DC motor to allow simulation of ball rotation. Superimposed alpha function describes control authority in Newtons of force applied to magnet by positive unit current through coil as a function of magnet center to coil center distance. Due to the limited region of control authority afforded by each coil, the two coil configuration used in energy harvesting results in a control deadzone at the tube center. For this reason magnet control was implemented with three coils as seen in the experimental setup shown in Fig. 7. The solenoids are energized with a constant voltage from a 6 cell Lithium Polymer battery, controlled through a pair of H-bridge motor drivers using the duty cycle of a PWM enable signal to regulate coil current. Magnet position is again measured using a laser distance sensor, and the apparatus is rotated using the same DC motor as for energy harvesting. Preliminary tests were conducted with the tube horizontal and stationary to observe the capabilities of the actuators before adding the effects of gravity and centripetal acceleration. Two control methods were tested for magnet positioning in order to evaluate the performance of the coils as actuators. The first employed ’bang-bang’ control where coils would be energized to push the magnet towards a desired position with a fixed percentage of maximum control effort in the form of a PWM signal. This approach allowed the tracking of very aggressive trajectories such as high frequency sinusoids at the cost of using more power. [$$\begin{aligned} \ddot{x} = g sin(\theta ) - (x - x\_{center})\dot{\theta }^2 + \frac{F\_{coil}}{m} \end{aligned}$$] (3) The second method consisted of a proportional controller with feedback linearization. The dynamics of the magnet in a non-inertial frame attached to the tube are a function of centripetal acceleration, tube orientation [$$\theta $$], gravity, and the force exerted by the coils, as described in Eq. (3) where [$$x$$] is position measured from one end of the tube. The electromagnetic force from each coil is [$$F\_{coil}=\alpha (x) I$$] where [$$I$$] is the coil current and [$$\alpha (x)$$] is an inductive coupling factor which is a function of magnet to coil center distance. To achieve feedback linearization, a model of the [$$\alpha $$] function developed in [5] was applied in MATLAB to the particular coil configuration used in testing, and an inverse function calculated as [$$\alpha \_{max}/\alpha (x)$$] which when multiplied with the output of the proportional controller would approximate a linear control output. The control authority of the three coils and the alpha function can be seen in Fig. 9 left. [] Fig. 8. Left: Measured average tracking error for different controllers across a range of sinusoidal trajectories over 60 cm travel Right: Normalized control effort for tested controllers representing the duty cycle of PWM signal driving the control circuitry The results of the first set of tests for tracking sinusoidal trajectories of various frequencies can be seen in Fig. 8 left. For the given testbed parameters, it is clear that tracking performance diminishes significantly at a period below 3 s or 20 RPM, where peak acceleration is above 2.6 [$$m/s^2$$]. Given the increase in error, the energy expended by the proportional controller attempting to track each trajectory also increases dramatically at periods below 3 s as seen in Fig. 8 right. As the period of the desired magnet trajectory matches that for rotation of the complete Moball system, this suggests that system speeds above 20 RPM would be poorly controlled unless circuit parameters, in particular the applied voltage or magnet travel, were modified. 3.3 Demonstration of Barycentric Control In order to demonstrate the solenoid actuator’s ability to apply barycentric control to the complete system, the single tube control rig was rotated at a constant rate to simulate Moball rolling along a flat surface at a constant speed. Despite the numerically optimized magnet trajectories being smooth sinusoids it was found that the maximum current which could be provided by the testbed drivers was insufficient to overcome gravity at the lowest point of the [$$\alpha $$] function, and so a step function was employed to place the magnet at the outer tube edge on a downward stroke, the tube center when vertical, and the inner tube edge during an upward stroke. This allowed transitions between coils and regions of control authority to occur at angles where gravity could be overcome, while maximizing the net forward torque applied to Moball to assist its motion. [] Fig. 9. Left: Absolute alpha function showing actuator authority for the given coil configuration, with inverse function used for feedback linearization Right: Results of test for ability of single magnet to contribute to forward motion of Moball Figure 9 right shows that the net forward torque that can be applied by each magnet reduces significantly with increasing Moball speed. As with tracking sinusoidal trajectories, the time spent at the end of the tube during each rotation, and therefore net torque produced by the magnet’s mass effect, could be increased by raising the voltage applied to the coils. However, this would increase the power consumption necessary to maintain forward motion. Control effort also increased with RPM as seen in Fig. 9 right due in part to more frequent transfers between regions of peak control authority. Coil current measurements proved inaccurate during control, but using the simplification of [$$P = V\_{max}^2 / R$$] suggests maximum control effort requires an upper bound of 60 Watts per tube. In practice this would be slightly below the estimated value as inductance limits the power dissipated through the coil when switching drivers cause the current to change rapidly. With the simple proportional controller employed, four tubes operating with a phase separation of [$$\pi /2$$] are able to produce a maximum torque of 3.72 Nm at 5 RPM, which requires a sustained 126 W. At 30 RPM four tubes can produce a maximum of 0.62 Nm requiring 189 W. The moment of inertia of the 35 kg Moball along each primary axis is approximately 11.75 kg[$${\cdot }$$]m[$$^2$$]. 4 Conclusion and Future Work The energy harvesting potential of the complete Moball system has been validated experimentally and shown to produce 7.23 W. Limitations on effective operating conditions have been identified and will be used to further optimize the harvesting system, which may include restructuring the tubes to form a 6-strut tensegrity icosahedron as used in other extraplanetary rovers such as the Ames Research Center’s SUPERball [11]. Such a configuration would increase the speeds at which Moball could operate before magnets become locked at the outer edge by reducing the distance of tube ends from their orthogonal rotation axes therefore reducing peak centripetal acceleration. This would also extend magnet travel and increase the range of centers of mass achievable for control. Barycentric control using a single mass element has been demonstrated, and the power requirements for sustained controlled motion of the complete system using a proportional magnet controller estimated to be on the order of 130 W. While sustained maximum control effort is not the intended use for the system, being more suited to biasing motion than creating it, current work seeks to reduce this figure by employing impulse controllers to only apply power when a magnet is in the peak of the [$$\alpha $$] function, increasing actuator energy efficiency. References 1. Davoodi, F., Burdick, J.W., Rais-Zadeh, M.: Moball network: a self-powered intelligent network of controllable spherical mobile sensors to explore solar planets and moons. In: AIAA SPACE 2014 Conference and Exposition, pp. 1–9 (2014) 2. Dry, S.J., Yau, M.K.: Large-scale mass balance effects of blowing snow and surface sublimation. J. Geophys. Res. Atmos. 107(D23), ACL 8-1–ACL 8-17 (2002)CrossRef 3. Nghiem, S.V., Clemete-Colon, P.: Arctic sea ice mapping with satellite radars. In: Radar Conference, RADAR 2008, pp. 1–3. IEEE (2008) 4. Southard, L., Hoeg, T.M., Palmer, D.W., Antol, J., Kolacinski, R.M., Quinn, R.D.: Exploring mars using a group of tumbleweed rovers. In: 2007 IEEE International Conference on Robotics and Automation, pp. 775–780 (2007) 5. Asama, J., Burkhardt, M.R., Davoodi, F., Burdick, J.W.: Design investigation of a coreless tubular linear generator for a Moball: a spherical exploration robot with wind-energy harvesting capability. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 244–251 (2015) 6. Burkhardt, M.R., Davoodi, F., Burdick, J.W., Davoudi, F.: Energy harvesting analysis for moball, a self-propelled mobile sensor platform capable of long duration operation in harsh terrains. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2665–2672 (2014) 7. Johnson, B.D.: The nonholonomy of the rolling sphere. Am. Math. Monthly 114(6), 500–508 (2007)MathSciNetMATH 8. Chase, R., Pandya, A.: A review of active mechanical driving principles of spherical robots. Robotics 1(1), 3 (2012)CrossRef 9. Mukherjee, R., Minor, M.A., Pukrushpan, J.T.: Simple motion planning strategies for spherobot: a spherical mobile robot. In: Proceedings of the 38th IEEE Conference on Decision and Control, 1999, vol. 3, pp. 2132–2137 (1999) 10. Mojabi, P.: Introducing August: a novel strategy for an omnidirectional spherical rolling robot. In: IEEE International Conference on Robotics and Automation, 2002 Proceedings, ICRA 2002, vol. 4, pp. 3527–3533 (2002) 11. Bruce, J., Caluwaerts, K., Iscen, A., Sabelhaus, A.P., SunSpiral, V.: Design and evolution of a modular tensegrity robot platform. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 3483–3489 (2014) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_8 Control of Pneumatic Actuators with Long Transmission Lines for Rehabilitation in MRI Melih Turkseven¹   and Jun Ueda¹   (1) Georgia Institute of Technology, 771 Ferst Drive, Atlanta, 30332, USA     Melih Turkseven (Corresponding author) Email: mturkseven3@gatech.edu URL: http://www.biorobotics.gatech.edu/wp/   Jun Ueda Email: jun.ueda@me.gatech.edu URL: http://www.biorobotics.gatech.edu/wp/ Abstract This study presents methods for understanding, modeling and control of tele-operated pneumatic actuators for rehabilitation in Magnetic Resonance Imaging (MRI). Pneumatic actuators have excellent MRI-compatibility as opposed to conventional electro-mechanical systems; however, the actuator and the system drivers cannot be co-located due to the MRI-compatibility requirements. The actuators are driven via long transmission lines, which affect the system dynamics significantly. Methods provided in this work produced accurate pressure estimation and control by accounting for the pressure dynamics in the lines, which has been neglected by previous work in this area. The effectiveness of the presented modeling and control methods were demonstrated on tele-operation test setups. This study also includes the design of necessary system components for the developed algorithms. An MRI-compatible optical sensor was developed for force feedback and its design was analyzed for high precision. Keywords Tele-operationPneumaticsControlModelingTransmission lines M. Turkseven–This work is supported by the Center of Compact and Efficient Fluid Power (CCEFP). 1 Introduction Magnetic Resonance Imaging is a flagship diagnosis technology for brain related studies. A recent trend is to develop MRI-compatible systems to implement rehabilitation procedures within MR-scanners to understand the underlying mechanisms in brain functioning. Conventional actuation methods, such as electric motors, cannot function inside MRI rooms due to the presence of intense magnetic field. Among MRI-safe actuation methods, such as ultrasonic motors, systems with fluid transmission and pneumatic actuators, pneumatic systems have generally been preferred for their efficiency and ease of use. For example, ultrasonic motors are not inherently back-drivable and they require a mechanical extension for compliance [6]. Fluid transmission is considerably harder to setup and maintain compared to the pneumatic actuation. The friction in the transmission lines and the inertia of the fluid media limit the bandwidth of such systems to below 1 Hz [11]. Pneumatic systems, on the other hand, has better friction characteristics and present a larger bandwidth. Their use is especially convenient for health-care institutions, since an auxiliary air source is typically present in such facilities. However, the pneumatic actuators and servo valves cannot collocate since general valves use solenoids with ferromagnetic materials. As a result, long transmission lines should be used between the actuators and the valves. Pneumatic transmission lines can have a significant influence on the pressure dynamics of the system. Since the flowing media, air, is compressible; a phase delay in pressure emerges between the two ends of a slender transmission line. In addition, the friction on the tube walls induce a pressure gradient along the transmission line. Resultantly, the dynamic difference between the pressures at the terminals of a long connecting tube either requires an additional pressure sensor at the end of the line, or an estimation method that can characterize the line dynamics. [] Fig. 1. The concept of rehabilitation via pneumatically driven systems in MRI (a) Schematics that describe the application (b) 1-DOF, pneumatically driven, MRI-compatible prototype. Standard pneumatic modeling, which assumes the line as a dead volume, or linearizing the flow equations have been satisfactory for low-bandwidth ([$$\le $$] 1 Hz) position control of tele-operated systems [1, 10]. However, more accurate modeling tools are needed for relatively faster operations such as haptic interfaces, rehabilitation robots and exoskeletons, in which force or impedance control is objected [11]. Existing works that involve high fidelity time-domain models demonstrated the use of such models with open-loop controllers [5]. Given the complexity of compressible flow dynamics, pneumatic line models have been neglected in the field of robotics for simplicity and faster computation. A relatively standardized method is to characterize the line as a dead volume and simply combine it to the actuator [3]. This generic approach provides a stable model for non-linear controllers, yet it cannot reflect the pressure inhomogeneity through the lines [9]. Even when accurate actuator pressure information is available, the standard model does not yield a good accuracy on tele-operated systems at high frequency ([$$\ge $$] 0.5 Hz) [9]. Relatively more advanced methods are required to achieve a good accuracy in both the actuator pressure estimation and the mass flow rate through the lines. This work is focused to develop a non-linear line model that can be utilized with closed-loop force controllers. The developed approaches were tested up to 2 Hz force control frequency, which is significantly higher than existing demonstrations on the systems that involve 5–10 m long transmission. This study aims to develop tools and methods that will enable the impedance control of a pneumatic system with long transmission lines for rehabilitation in MRI. Figure 1 presents the conceptual idea of how the tele-operated system should operate. The force sensing will be accomplished by developing an MRI compatible optical force sensor. The problem of obtaining pressure states in the actuator chambers was solved by designing an asymptotically-stable observer that uses force and position feedback. An advanced system model that accounts for the dynamics of the transmission lines was implemented to improve the bandwidth of the system. 2 Technical Approach Force feedback is a crucial factor for obtaining quantitative data on the performance of the patient and establishing effective control algorithms [7]. Hence, an optical force sensor with a compliant, plastic body was designed in this study. Figure 1(b) shows the deforming structure of the force sensor. The structure was designed as a displacement amplifying compliant mechanism (DACM) that is coupled to a photovoltaic circuit to transduce the deformation into electronic signal. The topology of the compliant mechanism was further analyzed for high precision [8]. [] Fig. 2. (a) The standard pneumatic system model that assumes a homogeneous pressure distribution along through the lines (b) The schematic representation of the proposed model The force feedback enabled by the developed force sensor was utilized in a model-based pressure observer. Open-loop pressure estimations of the standard system modeling, which assumes homogeneous pressure distribution along the lines, are corrected by the use of force feedback in a closed-loop manner. The developed pressure observer achieves asymptotical stability and improved the error convergence rate significantly [9]. While this closed-loop observation method also clearly improved the control accuracy and the contact stability between the piston and the environment; the improvement was limited in magnitude. Those experiments indicated the need for a more sophisticated control architecture. To address the need for a line model compatible to online control algorithms, the transmission line is discretized in its longitudinal axis and the governing equations are implemented on each of the elements with averaged parameter values [4]. In this research, the line is assumed to be comprised of two control volumes with one associated mass flow rate in between. Figure 2 shows the difference between the developed method and the standard system modeling approach. The flow rate dynamics is defined between two local pressures: the valve pressure, [$$P\_v$$], and the actuator pressure, [$$P\_c$$], as follows: [$$\begin{aligned} x = \begin{bmatrix} P\_{c\_1}V\_{c\_1} \\[0.5em] \dot{m}\_{c\_1} \\[0.5em] P\_{c\_2}V\_{c\_2} \\[0.5em] \dot{m}\_{c\_2} \end{bmatrix}\text {,} \quad \dot{x}=\begin{bmatrix} \dot{m}\_{c\_1}RT \\[0.5em] (P\_{v\_1}-P\_{c\_1})\frac{A}{L}-f\_{(\dot{m}\_{c\_1},P\_{v\_1},P\_{c\_1})} \\[0.5em] \dot{m}\_{c\_2}RT \\[0.5em] (P\_{v\_2}-P\_{c\_2})\frac{A}{L}-f\_{(\dot{m}\_{c\_2},P\_{v\_2},P\_{c\_2})} \end{bmatrix} \end{aligned}$$] (1) where L is the line length and A is the line cross-section area that defines the volume of the line: [$$V\_L=AL$$]. Each of the two sides of the pneumatic system, shown in Fig. 2, is denoted by [$$i=1,2$$]. The volume of the actuator chamber, [$$V\_{c\_i}$$] is added to half of the line volume, [$$V\_L$$], to define the volume associated with [$$P\_{c\_i}$$]. The rate of mass flow through the valve, [$$\dot{m}\_{v\_i}$$], is a function of the pressures across the valve and the area of the valve opening. R is the thermodynamic constant, T is the room temperature. This method assumes an average friction, f, for the whole line [4]. [] Fig. 3. The block diagram of the cascaded control algorithm Note that the position of the piston; hence, the chamber volumes are calculated and substituted at each time step to obtain the pressure values. The valve pressure is also considered to be available by the use of pressure sensors at the valve ports. The pressure in the actuator chamber is then obtained assuming an isothermal chamber model. The relation between the two nodes of the discretized line model are utilized in a cascaded controller, shown in Fig. 3. Based on the desired rate of mass transfer and the control error, [$$e=F\_{e\_d}-F$$], a new set of reference pressures for the valve is assigned. The outer layer in the control architecture, shown in Fig. 3, makes this assignment. The inner control loop tracks the updated reference, denoted by [$$F\_d$$] and formulated as a virtual force: [$$\begin{aligned} F\_{d} = F\_{v\_{1\_d}}A\_1 + F\_{v\_{2\_d}}A\_2 \end{aligned}$$] (2) where [$$F\_{v\_{1\_d}}$$] and [$$F\_{v\_{2\_d}}$$] are the updated reference pressures and [$$A\_1$$], [$$A\_2$$] are the actuator piston areas. The feedback for the inner control loop comes from the pressure sensors located at the exit of the valve ports. That inner loop is designed to converge the error:[$$e\_C$$] = [$$F\_d-F\_v$$] in the valve force to zero. For both control loops, a model based PI control law was derived and pursued. [] Fig. 4. Photo of the experimental setup for tele-operated pneumatic actuation utilized in this study 3 Experiments A double acting cylinder (Airpel E2.0DU) was connected to a four-way spool valve (Enfield Tech LH-05) via long transmission lines of 4 mm inside diameter. Pressure was measured by analog pressure sensors (SSI Technologies P51 series) at the four locations shown in Fig. 4: at the inlet of each actuator chamber and at each exit port of the spool valve. The external force applied to the actuator was measured by a load cell (Omegadyn LC703-50) attached at the tip of the piston rod. The piston was connected to a spring system that provided an external force proportional to the piston displacement, as shown in Fig. 4. The piston position was measured by using a Honeywell linear potentiometer (model F38000106). The velocity and acceleration of the piston are obtained by numerical differentiation with a first-order filter at 50 Hz. The system was controlled by a platform of Intel i7 @2.80 GHz processor with 8.00 GB RAM, where the control algorithm were implemented using Labview. The National Instruments A/D board (NI DAQ-6221) was used for data acquisition. [] Fig. 5. Output force control performance of the proposed method at the setup with 10 meters long transmission. (a) The desired force and output force at 2 Hz sinusoidal. (b) The valve and actuator pressures in the rear chamber at the 2 Hz force control experiment. 3.1 Procedure and Results The performance of described method was tested on four different force reference types: step, 0.5 Hz, 1 Hz, and 2 Hz signals; on four different setups with the following transmission line lengths: 2, 5, 7.25, and 10 m. The same tests were also conducted using a standard sliding-mode based force control for reference. The system model utilized for the sliding-mode based controller is shown in Fig. 1. A more detailed description of the standard application of sliding-mode control on pneumatic systems has been outlined in an earlier work [9]. A sample performance of the proposed modeling and control algorithm is presented in Fig. 5, which shows the output force variation and one of the measured actuator pressures in a 2-Hz force tracking test. The setup that corresponds to Fig. 5 involved 10 m long transmission lines. [] Fig. 6. The comparison of the developed method and the standard sliding-mode controller. (a) The accuracy in the force control, obtained on setups with different transmission lengths at 2 Hz force reference. (b) The mean squared error (MSE) in actuator chamber pressure estimation. The standard pneumatic modeling, is compared to the proposed line model on various transmission lengths with 2 Hz reference signal. The difference in the performance between the proposed control method and the sliding-mode control based on the standard model is given in Fig. 6. As presented in Fig. 6(a), the developed method improves the force control accuracy, especially as the transmission length gets higher. Similarly, the pressure estimation accuracy of the developed line model is significantly better than the standard approach. Note that, the diagram in Fig. 6(b) presents the error squares in the logarithmic scale. 4 Discussion and Insights This study adapted an accurate line modeling technique to the model-based control of tele-operated pneumatic systems. Standard pneumatic system models with widely applied sliding-model controllers neglect the line dynamics which has been previously shown to be very significant on tele-operation [9]. In this study, the pressure dynamics through long transmission lines (2–10 m) is characterized via a low-order nonlinear model. The force tracking experiments within the range of 0.5 Hz to 2 Hz validated the accuracy of the approach, as shown in Fig. 5. In particular, 2 Hz force control on a system that involves 10 m long pneumatic transmission is beyond the capacity of the standard methods. Apart from the modeling accuracy, the efficiency of the control approach is also demonstrated experimentally. The described algorithm that updates the reference for the driving unit, valve, compensates for the time-delay and flow attenuation in the lines. The phase lag between the updated and original reference signals shown in Fig. 5(a) is an emergent result of the described algorithm. The mass transport delay through the line and the selected gain magnitudes in this study, however, resulted in a chatter between the layers of the cascaded control algorithm. This study shows that the described control algorithm, as well as the simplified non-linear system model, was shown to be efficient for the control of pneumatic actuators with long transmission lines. Such systems are promising for robotic applications performed in MRI [2]. The optimal use of the proposed method and its integration to therapeutic procedures will be studied as a future work. References 1. Elmasry, A., Liermann, M.: Passive pneumatic teleoperation system. In: ASME/BATH 2013 Symposium on Fluid Power and Motion Control, pp. V001T01A040-V001T01A040. American Society of Mechanical Engineers (2013) 2. Gassert, R., Burdet, E., Chinzei, K.: Opportunities and challenges in MR-compatible robotics. IEEE Eng. Med. Biol. Mag. 27(3), 15–22 (2008)CrossRef 3. Gulati, N., Barth, E.: A globally stable, load-independent pressure observer for the servo control of pneumatic actuators. IEEE/ASME Trans. Mechatron. 14(3), 295–306 (2009)CrossRef 4. Krichel, S.V., Sawodny, O.: Non-linear friction modelling and simulation of long pneumatic transmission lines. Math. Comput. Model. Dyn. Syst. 20(1), 23–44 (2014)CrossRefMATH 5. Li, J., Kawashima, K., Fujita, T., Kagawa, T.: Control design of a pneumatic cylinder with distributed model of pipelines. Precis. Eng. 37(4), 880–887 (2013)CrossRef 6. Sergi, F., Chawda, V., OMalley, M.K.: Interaction control of a non-backdriveable MR-compatible actuator through series elasticity. In: ASME 2013 Dynamic Systems and Control Conference, pp. V002T27A002-V002T27A002. American Society of Mechanical Engineers (2013) 7. Turkseven, M., Ueda, J.: Design of an MRI compatible haptic interface. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2139–2144. IEEE (2011) 8. Turkseven, M., Ueda, J.: Analysis of an MRI compatible force sensor for sensitivity and precision. IEEE Sens. J. 13(2), 476–486 (2013)CrossRef 9. Turkseven, M., Ueda, J.: An asymptotically stable pressure observer based on load and displacement sensing for pneumatic actuators with long transmission lines. IEEE Trans. Mechatron. (2015) 10. Yang, B., Tan, U.-X., McMillan, A., Gullapalli, R., Desai, J.: Design and control of a 1-dof MRI-compatible pneumatically actuated robot with long transmission lines. IEEE/ASME Trans. Mechatron. 16(6), 1040–1048 (2011)CrossRef 11. Yu, N., Hollnagel, C., Blickenstorfer, A., Kollias, S.S., Riener, R.: Comparison of MRI-compatible mechatronic systems with hydrodynamic and pneumatic actuation. IEEE/ASME Trans. Mechatron. 13(3), 268–277 (2008)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_9 Terrain-Dependant Control of Hexapod Robots Using Vision Timon Homberger^(1, 2), Marko Bjelonic^(1, 3), Navinda Kottege¹   and Paulo V. K. Borges¹ (1) Autonomous Systems Lab, CSIRO, Brisbane, Queensland, 4069, Australia (2) Department of Mechanical and Process Engineering, ETH Zurich, 8092 Zurich, Switzerland (3) Faculty of Mechanical Engineering, Technische Universität Darmstadt, 64287 Darmstadt, Germany     Navinda Kottege Email: navinda.kottege@csiro.au Abstract The ability to traverse uneven terrain is one of the key advantages of legged robots. However, their effectiveness relies on selecting appropriate gait parameters, such as stride height and leg stiffness. The optimal parameters highly depend on the characteristics of the terrain. This work presents a novel stereo vision based terrain sensing method for a hexapod robot with 30 degrees of freedom. The terrain in front of the robot is analyzed by extracting a set of features which enable the system to characterize a large number of terrain types. Gait parameters and leg stiffness for impedance control are adapted based on this terrain characterization. Experiments show that adaptive impedance control leads to efficient locomotion in terms of energy consumption, mission success and body stability. Keywords Legged robotsAdaptive controlStereo visionTerrain perceptionRough terrain traversal T. Homberger and M. Bjelonic—Contributed equally to this work. 1 Introduction Extreme terrain limits the locomotion of mobile robots. Wheeled robots, for example, require an appropriate surface structure for safe maneuvering. Legged robots, on the other hand, are able to adapt their gaits to overcome challenging terrain [1]. One reason why legged robots have gained popularity is because large parts of the Earth’s surface are still inaccessible to wheeled machines [2]. Nevertheless, wheeled machines outperform legged robots in many instances due to the complexity of walking machines [3]. It still remains an open challenge to further improve performance of legged machines in the field, especially with a focus on using terrain information to adapt locomotion parameters. A large number of methods for terrain perception have been discussed in the literature. Such perception is based on exteroceptive sensing [4], proprioceptive sensing [5, 6], or a combination of both [7, 8]. The literature often discriminates between terrain classification and terrain characterization [7], approaching them as two different problems. Terrain classification aims to associate a surface area with a category in a set of predefined terrain types [9, 10] while terrain characterization tends to assess terrain properties with numeric values, without considering semantics. Aiming for smooth and efficient maneuvering on variable cluttered ground, this work presents a terrain perception system that characterizes the terrain and adapts the virtual stiffness of an impedance controller along with an assessment of the use of step height characterization for stride height adaptation. For this purpose, a highly flexible hexapod robot with 30 degrees-of-freedom (DoF), Weaver [11] is equipped with a vision-based motion adaptation system. The robot and the stereo vision setup are shown in Fig. 1. The visual perception module employs a novel method for feature extraction, the “Even run length”, as well as other terrain feature evaluation methods for accurate characterization of a large number of terrain types. [] Fig. 1. Hexapod robot Weaver with its stereo camera system on rough terrain. With its five DoF per leg, Weaver is able to control the orientation and position of the foot tips to maintain ground contact by sensing the force at each foot tip. The legs are controlled analogous to a virtual mass-spring-damper system implemented with a Cartesian space impedance controller. Low virtual stiffness of the legs allows traversing very uneven and cluttered terrain while the robot would get stuck if the legs are very stiff. It was also found that with low virtual stiffness, efficiency decreases for motion on flat terrain. Therefore, the adaptive impedance controller introduced in this paper extends the control strategy described in [11]. 2 Terrain-Dependant Control A stereo camera pair is rigidly mounted on the robot such that it captures the terrain immediately in front of the robot. Intrinsic and extrinsic calibration of the stereo pair is realized using the OpenCV stereo calibration package with a checkerboard of known dimensions. The generation of a disparity map (Fig. 2) provides depth information of the scene. It is stored as a point cloud in 3D space. This point cloud is downscaled using a voxel grid filter [12] for more efficient spatial transformation. Using an Inertial Measurement Unit (IMU) the data is transformed into a coordinate system which is aligned with the gravity vector. This allows terrain intrinsic feature extraction [12, 13]. A Digital Elevation Model (DEM) is generated by discretizing the horizontal plane into quadratic cells [14, 15]. The DEM point cloud consists of the maximum terrain elevation in each cell (Fig. 4). [] Fig. 2. From left to right: Left camera rectified image, right camera rectified image, disparity map. A region of interest (RoI) of the DEM in front of the robot (covering an area equivalent to that of the robot) is defined as the relevant area of interest for terrain characterization, considering that the hexapod is moving forward. A plane is fit into the RoI using a least squares method. From the fitted plane and the DEM data inside the RoI a set of terrain features [$$f\_i$$] is extracted. This set is designed to yield distinct characterization of a large number of surface types (Fig. 6). The diagram shown in Fig. 3 presents the basic pipeline. The features used for ground characterization are detailed in the following: [] Fig. 3. Overview on exteroceptive terrain perception and adaptive control. The stride height adaptation is part of future work. 1. (1) Center line average [$$f\_1$$]: Center line average is used to characterize the spread of the elevation data [13]. [$$\begin{aligned} f\_1=\frac{1}{n\cdot m}\cdot \sum \_{0}^{n}\sum \_{0}^{m}\left| z\_{data}-z\_{plane} \right| \end{aligned}$$] (1) Here, [$$z\_{data}$$] denotes the elevation value of a DEM point. [$$z\_{plane}$$] is the elevation of the corresponding point of the fitted plane (i.e. [$$x\_{data} = x\_{plane}$$] and [$$y\_{data}=y\_{plane}$$]). n and m are the dimensions of the considered area, expressed as the number of grid cells of the DEM.   2. (2) Slope [$$f\_2$$] and [$$f\_3$$]: The slope of the fitted plane is the angular difference between the horizontal plane and the plane that was fitted into the elevation data. Different angles in lengthwise and crosswise directions of the robot’s body are the inclination angles of the terrain.   3. (3) Average local variance [$$f\_4$$]: Locally distributed variance is derived via a “local descriptor” method. DEM cells in a limited neighborhood to a local descriptor point are considered for variance calculation of the local descriptor cell. Local descriptor method is similarly used in [12, 15]. The spatial average of the local variance (Fig. 4) is a measure of the size of ground clutter.   4. (4) Line of sight shadows [$$f\_5$$]: There are areas inside the RoI which cannot be perceived by the cameras (Fig. 5). These geometrical perception limitations are referred to as “shadows”. These occur if an object/clutter inhibits the cameras lines of sight of reaching certain areas [16]. The system classifies these unperceived areas as uncertainties. The more shadows occur the more conservative the choice of motion parameters, e.g. low leg stiffness.   5. (5) Maximum step height [$$f\_6$$]: For sensible adaptation of the stride height, the maximum local change in elevation occurring inside the RoI is determined. A “local descriptor” method is used for calculation. The highest elevation difference detected in a bounded neighbourhood of the local descriptor is the local step height. Maximum step height is the highest step height inside the RoI. A similar approach for maximum step height calculation is used in [15].   6. (6) Even run length [$$f\_7$$]: This is a novel method to quantify the amount of continuous, nearly-horizontal surfaces (Fig. 4). It is adapted from grey scale image analysis methods [17]. Sequential cells (lengthwise direction) of the DEM are considered to be part of a run if they meet the following two requirements: (1) The elevation of all cells inside a run is within a specified range. (2) The run contains a minimum number of cells. Summing up the total number of cells that are part of a run yields a measure for the tendency of surface patches to be horizontal.   [] Fig. 4. DEM Point Clouds in three different color schemes, from left to right: Terrain elevation, local variance with red [$$\rightarrow $$] high value, Even run length with white [$$\rightarrow $$] cells within the RoI that appertain to a run. [] Fig. 5. Shadows caused by line of sight limitations [16]. From subsets of the extracted features [$$f\_i$$], descriptive ground characterization parameters roughness [$$r\_{a}$$] and step height [$$h\_a$$] are derived (Fig. 3). The roughness and step height are derived by [$$\begin{aligned} r\_a = \frac{1}{a\_{norm,1}} \sum \_{i=1}^{5} a\_i \cdot f\_i \end{aligned}$$] (2) [$$\begin{aligned} h\_a = (a\_6 \cdot f\_6 + a\_7 \cdot f\_4 \cdot f\_7)/(a\_{norm,2}) \end{aligned}$$] (3) The weighting parameters [$$a\_i$$] are set empirically, i.e. by defining suitable [$$r\_a$$] and [$$h\_a$$] for a number of exemplary surface types. The parameters [$$r\_a$$] and [$$h\_a$$] are dimensionless values between zero and one. These parameters are used for adaptive impedance control and future stride height adaptation respectively (Fig. 3). The formula for step height characterization (3) includes the term: [$$a\_7 \cdot f\_4 \cdot f\_7$$]. This correction term quantifies the occurrence of nearly planar surfaces which are bordered by slopes. It was found that this kind of terrain requires high stride height to be smoothly traversed. This novel method is designed to enable the robot to traverse terrain with sharp drop-offs/inclines (e.g. curbs, steps) by adding an extra margin to the maximum step height [$$f\_6$$]. The system uses terrain characterization rather than classification to achieve this task [7]. Characterization results are presented in Sect. 4. Adaptive impedance control sets the virtual stiffness [$$c\_{virt}$$] depending on the vision based roughness estimation [$$r\_a$$]. A suitable correlation between virtual stiffness and roughness gives a third order polynomial: [$$\begin{aligned} c\_{virt} = b\_0 + b\_1 \cdot r\_a + b\_2 \cdot r\_a^2 + b\_3 \cdot r\_a^3 \end{aligned}$$] (4) It is derived by choosing desired (optimal) stiffness values for a variety of terrain types. A set of roughness/stiffness data points are chosen along with the corresponding roughness estimates and a curve is fit over these points using minimizing least squares error yielding (4). As the robot perceives the roughness [$$r\_a$$] and step height [$$h\_a$$] characterization at a given distance in front of the platform, information on ego-motion is needed. An external position estimation system described in Sect. 3 is used to provide the robot with its relative position, which is used to derive the required ego-motion information. The visually perceived terrain characterization is associated with a point in horizontal 2D space. This point lies centrally inside the RoI of the DEM. In each time frame the area which contains the robot’s vertical projection to the ground is searched for the point with the highest corresponding roughness and step height value. This ensures sufficiently low stiffness (and sufficiently high stride height respectively) to overcome rough terrain. 3 Experiments For comparative evaluation of performance, a multi-terrain testbed of 8.4 m length was used for experimentation. It consists of patches of six different terrain types (Fig. 6). Terrain types include flat ground (A), planar slope (B), wooden cuboid blocks (C) and cluttered terrain consisting of crumbled concrete, sand, pebbles and variably sized stones (D-F). The experiments consisted of the hexapod robot repeatedly traversing this testbed with high level navigation (velocity commands) provided by a human operator via joystick. A sample video of the operation is available online¹. [] Fig. 6. Multi-terrain testbed: 2.93 m of flat ground (segment A) followed by 1.2 m of inclined planar segment (10[$$^{\circ }$$]) (segment B) are traversed before entering segment C. Maximum height difference: 113% (segment C), 28% (segment D), 11% (segment E) and 72% (segment F) of Weavers body height. [] Fig. 7. The CoT of the adaptive (black line) and non-adaptive controller (red line) shows the mean of eight runs on the testbed. One standard deviation of the adaptive controller is shaded in grey. In addition, the virtual stiffness of the adaptive and non-adaptive controller is shown. For evaluation of the motion efficiency, the cost of transport CoT is defined as [$$\begin{aligned} CoT = P/(mgv) \end{aligned}$$] (5) where P is the power consumption, m is the mass of the robot, g is the gravitational acceleration and v is the velocity of the robot. The power consumption [$$P=UI$$] was measured at 20 Hz by an Arduino based sensor system. This sensor monitors the voltage U and current I of the power supply. Weaver’s mass is 9.3 kg. A robotic total station (Leica TS12) was used during testing to track the position of the robot at 4 Hz. The total station tracks a reflector prism attached to the robot and provides its 3D position. This ego-motion measurement serves as input to the terrain characterization as described in the end of Sect. 2. In addition, the velocity of the robot in (5) is approximated by finite differences of the position for evaluation of the robot’s cost of transport (CoT). Two sets of eight runs each have been conducted to examine the CoT. Adaptive impedance control was used during the first set and constant stiffness was applied in the second set. The range of the virtual stiffness of the adaptive controller is set between 1060 Nm[$$^{-1}$$] and 70340 Nm[$$^{-1}$$]. The constant stiffness of the non-adaptive impedance controller is the minimum of the range of stiffness values of the adaptive impedance control (i.e. 1060 Nm[$$^{-1}$$]). Thus, it allows to overcome the most difficult segments of the multi-terrain testbed. [] Fig. 8. Limit cycles of the roll and pitch movement projected onto the phase plane for segments A and B of the multi-terrain testbed (based on IMU data). 4 Results and Analysis Adaptive Impedance Control. The resulting CoT of the experiments described in Sect. 3 are displayed in Fig. 7. The difference in CoT in segments A and B can be explained by angular and vertical robot body motion which occurs if walking on flat ground or planar slope with a low stiffness. The CoT reduction of adaptive impedance control is especially high in segment B since the body motion (non-adaptive case) causes instability and slippage on the slope. The additional body motion of the non-adaptive controller is shown in Fig. 8. It can be seen that the limit cycles of roll and pitch movement is reduced by the adaptive controller in segment A and B. During transition from flat ground to planar slope CoT is increased in both sets. On rough terrain (segments C to F) the two sets have similar CoT. This matches expectations as there is no significant difference of virtual stiffness. The adaptive controller reduces the CoT by 23% (segment A), 13% (segment B), 3% (segment C), 10% (segment D), 2% (segment E) and 29% (segment F). This also shows the value of adapting the virtual stiffness by a small amount on rough terrain. Especially in segment F the adaptive controller reduces high CoT spikes which occurs when the robot approaches zero velocity. Step Height Characterization. The step height characterization term [$$h\_a$$] in (3) serves as input for terrain dependant stride height adaptation². As can be seen in Fig. 9 the correction term adds an extra margin to the corrected stride height [$$h\_a$$] during traversal of segment C of the testbed. This segment consists of wooden cuboid blocks and therefore contains horizontal surface patches with vertical transitions. Therefore the system detects long run lengths and high average local variance simultaneously. In all the other segments the correction term is close to zero as either average local variance [$$f\_4$$] or the even run length [$$f\_7$$] is close to zero (no continuous horizontal surfaces). [] Fig. 9. Mean of step height characterisation terms and roughness characterization with the eight runs using adaptive impedance control. [] Fig. 10. Comparison of center line average estimation: proprioceptive (foot tip positions) and exteroceptive (vision based terrain perception). In segment B (planar slope) a higher stride height is desired to be set than during traversal in segment A (flat ground). Corrected step height does not provide discrimination between segments A and B as can be seen in Fig. 9. To achieve this discrimination [$$h\_a$$] can be considered to be complemented by the roughness [$$r\_a$$]. Centerline Average Estimation. In Fig. 10 shows verification results of the visual extraction of center line average. It is measured along the testbed and compared to the proprioceptive estimation of the center line average. The latter is derived from the foot tip positions using forward kinematics [11]. It can be seen that both estimations of the center line average coincide with each other on uneven terrain (segments C to F). There is a constant offset between the two estimations on flat terrain (segment A) and the proprioceptive estimation does not recognize the transition from flat (segment A) to inclined terrain (segment B) because the stiffness is too high to adapt the foot tip positions. 5 Conclusions This works presented a stereo vision based terrain sensing method for a hexapod robot. The system characterizes the terrain in front of the robot and adapts the virtual stiffness of the impedance controller as well as the stride height. The experimental results with the hexapod platform Weaver showed significant efficiency improvements. In particular, the robot managed to efficiently traverse the multi-terrain testbed. Adaptive impedance control showed slightly better performance on very uneven terrain while it significantly lowered CoT for motion on flat and inclined terrain. In addition, body stability was improved by adaptive impedance control as well. The robot chose optimal virtual stiffness values depending on the traversed terrain. Moreover, the feature perception system demonstrated the ability of the presented terrain analysis method to characterize nearly even surface patches which are bordered by steep slopes. Adapting the robot’s stride height accordingly benefits application scenario in which the robot is confronted with man-made structures such as curbs or steps. References 1. Bares, J.E., Whittaker, W.L.: Configuration of autonomous walkers for extreme terrain. Int. J. Robot. Res. 12(6), 535–559 (1993)CrossRef 2. Belter, D., Walas, K.: A compact walking robot - flexible research and development platform. In: Szewczyk, R., Zieliński, C., Kaliczyńska, M. (eds.) Recent Advances in Automation, Robotics and Measuring Techniques. AISC, vol. 267, pp. 343–352. Springer, Heidelberg (2014)CrossRef 3. Görner, M., Wimböck, T., Baumann, A., Fuchs, M., Bahls, T., Grebenstein, M., Borst, C., Butterfass, J., Hirzinger, G.: The DLR-Crawler: a testbed for actively compliant hexapod walking based on the fingers of DLR-Hand II. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1525–1531 (2008) 4. Bellutta, P., Manduchi, R., Matthies, L., Owens, K., Rankin, A.: Terrain perception for DEMO III. In: IEEE Intelligent Vehicles Symposium, pp. 326–331 (2000) 5. Coyle, E., Jr., E.G.C., Roberts, R.G.: Speed independent terrain classification using singular value decomposition interpolation. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 4014–4019 (2011) 6. Tsujita, K., Matsuda, M., Masuda, T.: An adaptive locomotion of a quadruped robot on irregular terrain using simple biomimetic oscillator and reflex controllers without visual information. In: IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 1358–1363, December 2010 7. Krebs, A., Pradalier, C., Siegwart, R.: Comparison of boosting based terrain classification using proprioceptive and exteroceptive data. In: Khatib, O., Kumar, V., Pappas, G. (eds.) Experimental Robotics. STAR, vol. 54, pp. 93–102. Springer, Heidelberg (2009)CrossRef 8. Brooks, C.A., Iagnemma, K.: Vibration-based terrain classification for planetary exploration rovers. IEEE Trans. Robot. 21(6), 1185–1191 (2005)CrossRef 9. Best, G., Moghadam, P., Kottege, N., Kleeman, L.: Terrain classification using a hexapod robot. In: Australasian Conference on Robotics and Automation (ACRA) (2013) 10. Christie, J., Kottege, N.: Acoustics based terrain classification for legged robots. In: IEEE International Conference on Robotics and Automation (ICRA) (2016) 11. Bjelonic, M., Kottege, N., Beckerle, P.: Proprioceptive control of an over-actuated hexapod robot in unstructured terrain. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2016, to appear) 12. Bellone, M., Reina, G., Giannoccaro, N.I., Spedicato, L.: Unevenness point descriptor for terrain analysis in mobile robot applications. Int. J. Adv. Robot. Syst. 10, 284 (2013)CrossRef 13. Hoffman, R., Krotkov, E.: Terrain roughness measurement from elevation maps. In: Advances in Intelligent Robotics Systems Conference, pp. 104–114 (1990) 14. Aeschimann, R., Borges, P.V.K.: Ground or obstacles? Detecting clear paths in vehicle navigation. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3927–3934 (2015) 15. Chilian, A., Hirschmüller, H.: Stereo camera based navigation of mobile robots on rough terrain. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4571–4576 (2009) 16. Kolter, J.Z., Kim, Y., Ng, A.Y.: Stereo vision and terrain modeling for quadruped robots. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 1557–1564 (2009) 17. Theodoridis, S., Koutroumbas, K.: Pattern Recognition, 3rd edn. Academic Press Inc., Orlando (2006)MATH Footnotes 1 Video available here: https://​confluence.​csiro.​au/​display/​ASL/​ISER2016Stereo.   2 Adaptive stride height will be addressed in future work.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_10 Untethered One-Legged Hopping in 3D Using Linear Elastic Actuator in Parallel (LEAP) Zachary Batts¹  , Joohyung Kim¹   and Katsu Yamane¹   (1) Disney Research, 4720 Forbes Ave, Lower Level Suite 110, Pittsburgh, Pennsylvania 15217, USA     Zachary Batts Email: zachary.batts@disneyresearch.com URL: http://www.disneyresearch.com/   Joohyung Kim (Corresponding author) Email: joohyung.kim@disneyresearch.com URL: http://www.disneyresearch.com/   Katsu Yamane Email: kyamane@disneyresearch.com URL: http://www.disneyresearch.com/ Abstract Current and previous single-legged hopping robots are energetically tethered and lack portability. Here, we present the design and control of an untethered, energetically autonomous single-legged hopping robot. The thrust-producing mechanism of the robot’s leg is an actuated prismatic joint, called a linear elastic actuator in parallel (LEAP). The LEAP mechanism comprises a voice coil actuator in parallel with two compression springs, which gives our robot passive compliance. An actuated gimbal hip joint is realized by two standard servomotors. To control the robot, we adapt Raibert’s hopping controller, and find we can maintain balance roughly in-place for up to approx. 7 s (19 hops) while continuously hopping. Keywords HopperLegged-locomotionSpring-massParallel elastic actuatorVoice coil actuator 1 Motivation, Problem Statement, Related Work Legged robots are useful because, among other advantages [1], they can overcome uneven terrain, and can entertain an audience as they act out complex movements (e.g. different gaits). Single-legged robots have the simplest topology in the class of legged systems, and are limited to a hopping gait. Not only do single legged hopping robots provide a simplified testbed for locomotion control algorithms [2], they also demand high-speed, high-force actuation to achieve safe and robust ground-clearance and subject the actuator to greater mechanical stresses than do multi-legged systems. For these reasons, single-legged hopping robots provide an ideal benchmark for actuators used in legged locomotion. The actuation requirements for a single-legged system are so great that to date, to the authors’ best knowledge, no untethered single-legged hopper has achieved continuous hopping without using offboard power. Previous successful hopping robots (e.g. [3]) are tethered to stationary motors and/or power sources to avoid overburdening the robot. Sayyad et al. [4] provide a thorough review of single-legged hopping robots up to 2007, and note portability as a critical stepping stone for commercial applications. The present authors have found no more recent examples of research that have achieved this goal. Here, we attempt to “cut the tether” and create an untethered energetically-autonomous single-legged hopping robot. We employ a linear elastic actuator in parallel (LEAP) [5], previously developed by the present authors, which places a voice coil actuator (VCA) in parallel with compression springs, to act as the primary weight-bearing actuator for our single-legged hopping robot. The parallel configuration lessens the force requirements of the VCA by offloading weight to the spring, and allows the VCA to directly compress or extend the spring, independent of foot contact (in contrast to a series elastic topology). We chose a voice coil as our actuator because it is electrically-powered, is direct-drive, has low moving inertia, has little friction (the coil and body do not make contact), can produce force at high speeds, and has linear force output. These characteristics allow us to power, control, and actuate our robot with onboard batteries, microcontroller, and actuators, respectively. In Sect. 2, we provide a hardware description of our robot, detail our method to estimate center-of-mass velocity, and present our locomotion controller. In Sect. 3, we give an overview of our simulation environment and optimized controller, and present the results of our physical experiment. We discuss the results in Sect. 4. 2 Technical Approach 2.1 Robot Description We designed our robot to be kinematically similar to Raibert’s 3D hopper [2] so that we might use his simple controller as an “off-the-shelf” algorithm to control our robot. Our hopper is an open kinematic chain composed of four links (Fig. 1). The first “torso” link (mass [$$m\_1 = 1.41$$] kg) contains the power source (seven 11.1 V 1300 mAh LiPo batteries), microcontroller (Texas Instruments LAUNCHXL-F28377S), power circuitry, an IMU sensor (Xsens MTi-3-8A7G6-DK) which outputs filtered orientation and velocity increment data at 100 Hz. The second “thigh” link (mass [$$m\_2 = 0.31$$] kg) is composed of two identical geared servomotors (Dynamixel MX-64T) whose axes intersect perpendicularly to realize a (gimbal) hip joint between the torso and third “shank” link (mass [$$m\_3 = 0.52$$] kg). The servomotor positions describe the configuration of the hip joint, which is defined by a roll angle [$$\phi \_1$$] between the torso and thigh, and pitch angle [$$\phi \_2$$] between the thigh and shank. The shank and fourth “foot” link (mass [$$m\_4 = 0.23$$] kg) compose the LEAP mechanism (see [5]), which is an actuated prismatic joint whose displacement is defined by a stroke length ([$$\phi \_3$$]). The IMU frame describes the configuration of the torso floating base, which is defined by a position vector [$$\mathbf {p} = [p\_x, p\_y, p\_z]^T$$] and quaternion vector [$$\mathbf {Q} = [q\_w, q\_i, q\_j, q\_k]^T$$]. The robot’s configuration is fully defined by concatenating the configuration variables into the vector [$$\mathbf {q} = [p\_x, p\_y, p\_z, q\_w, q\_i, q\_j, q\_k, \phi \_1, \phi \_2, \phi \_3]^T$$]. We selected a voice coil model roughly by maximizing work density and stroke (maximum coil displacement) while minimizing price, and selected stock compression springs with spring constants that roughly maximize steady-state hopping height in a simulated 1D environment (see [5] for details). [] Fig. 1. (Left) CAD model of LEAP mechanism with component callouts. (Middle) CAD model of proposed hopping robot. (Right) Photo of assembled hopping robot. 2.2 Center of Mass Velocity Estimation To perform proper foot placement, we must accurately estimate the horizontal components of the center of mass (COM) velocity of the entire robot. To do so, we first estimate the velocity of the IMU frame [$$\mathbf {v}^{imu}\_{t}$$] at time step t, then add the relative velocity of the COM with respect to the IMU. We define the predicted velocity [$$\mathbf {v}^p\_t$$] of the IMU by summing velocity increments with respect to the IMU velocity estimate of the previous time step [$$\mathbf {v}^{imu}\_{t-1}$$] as [$$\begin{aligned} \mathbf {v}^p\_t = \mathbf {v}^{imu}\_{t-1} + \mathbf {\Delta v}\_t \end{aligned}$$] (1) where [$$\mathbf {\Delta v}\_t$$] are velocity increment measurements output from the IMU at time t. We define the update velocity [$$\mathbf {v}^u\_{t}$$] of the IMU by differentiating the forward kinematics of the IMU during stance. Specifically, we treat the IMU as an end-effector of a rooted open-link kinematic chain by assuming the tip of the foot maintains static contact with the ground through a spherical joint. Solving the forward kinematics gives the position of the IMU as a function of IMU orientation and joint angles, concatenated as [$$\mathbf {y} = [q\_w, q\_i, q\_j, q\_k, \phi \_1, \phi \_2, \phi \_3]^T$$], such that the IMU position with respect to the foot is a function of the sensor variables, [$$\mathbf {p}\_t = f(\mathbf {y}\_t)$$]. The update velocity is found by differentiating the IMU position, [$$\begin{aligned} \mathbf {v}^u\_{t} = \frac{d}{dt}(\mathbf {p}\_{t}) = \frac{\partial \mathbf {p}\_t}{\partial \mathbf {y}} \dot{\mathbf {y}} = \mathbf {J}\_{1} \dot{\mathbf {y}} \end{aligned}$$] (2) where [$$\mathbf {J}\_{1} = \mathbf {J}\_{1}(\mathbf {y}\_t)$$] is a standard manipulator Jacobian. We estimate the IMU velocity as a weighted average of the update and predict velocities, [$$\begin{aligned} \mathbf {v}^{imu}\_t = K\_{f} \mathbf {v}^u\_{t} + (1-K\_{f}) \mathbf {v}^p\_t \end{aligned}$$] (3) where [$$K\_{f}$$] is the IMU velocity filter gain. We only estimate velocity during stance phase, when the foot is in contact with the ground. During flight phase, we assume the horizontal (x and y) components of the COM velocity remain constant, and thus do not need to estimate IMU velocity. To find the relative velocity of the COM, we first find the relative position of the COM with respect to the IMU, [$$\mathbf {r}^{com/imu} = \frac{1}{M} (m\_1 \mathbf {r}^{1/imu} + m\_2 \mathbf {r}^{2/imu} + m\_3 \mathbf {r}^{3/imu} + m\_4 \mathbf {r}^{4/imu})$$] where [$$M = m\_1 + m\_2 + m\_3 + m\_4$$] is the total mass of the robot, and [$$\mathbf {r}^{i/imu}$$] is the relative position of the COM of link i with respect to the IMU. Noting that [$$\mathbf {r}^{com/imu} = \mathbf {r}^{com/imu}(\mathbf {y})$$], we differentiate it to find the relative COM velocity, [$$\begin{aligned} \mathbf {v}^{com/imu} = \frac{d}{dt}(\mathbf {r}^{com/imu}) = \frac{\partial \mathbf {r}^{com/imu}}{\partial \mathbf {y}} \dot{\mathbf {y}} = \mathbf {J}\_2 \dot{\mathbf {y}} \end{aligned}$$] (4) Adding this result to the IMU velocity, we estimate the COM velocity at time-step t as [$$\begin{aligned} \mathbf {v}^{com}\_t = \mathbf {v}^{imu}\_t + \mathbf {v}^{com/imu}\_t \end{aligned}$$] (5) 2.3 Modified Raibert Controller Raibert’s 3D hopping controller [2] is intuitive, and comprises three independent components: (1) fixed thrust control during stance, (2) torso orientation control during stance, and (3) foot placement control during flight. Thrust and orientation control are active when contact is detected, which occurs when stroke falls below a set threshold. Foot placement control is active when contact is not detected. First, to provide a fixed thrust during stance, we implement a bang-bang controller, which works to inject energy into the system. The controller commands maximum negative voltage to the VCA during leg compression, and maximum positive voltage to the VCA during extension, which ensures the VCA always performs net positive work. Second, the controller servos global pitch and roll angles of the torso ([$$\theta ^P$$] and [$$\theta ^R$$] respectively) to zero during stance using proportional control, such that the commanded pitch and roll joint torques are [$$f^{st}\_1 = K^{st}\_{p1}\theta \_R$$] and [$$f^{st}\_2 = K^{st}\_{p2}\theta \_P$$], respectively, where [$$K^{st}\_{p1}$$] and [$$K^{st}\_{p2}$$] are proportional gains. We don’t use a derivative term since the D-gain of the built-in PID control of the Dynamixel servomotors has no effect on the motion. Third, the foot placement controller calculates the desired foot placement with respect to the center of mass, which is tracked by an inverse kinematics (IK) controller. Specifically, the desired horizontal foot placement with respect to the COM, [$$\mathbf {r}^{f/com,d}\_{x,y} = [x^{f/com,d},y^{f/com,d}]^T$$] is a function of the expected stance time [$$T\_{st}$$], horizontal components of the COM velocity [$$\mathbf {v}^{com}\_{x,y} = [\dot{x}^{com},\dot{y}^{com}]^T$$], and desired horizontal velocity [$$\mathbf {v}^{com,d}\_{x,y} = [\dot{x}^{com,d},\dot{y}^{com,d}]^T$$], [$$\begin{aligned} x^{f/com,d} = \frac{\dot{x}^{com} T\_{st}}{2} + K\_x(\dot{x}^{com} - \dot{x}^{com,d}) \end{aligned}$$] (6) [$$\begin{aligned} y^{f/com,d} = \frac{\dot{y}^{com} T\_{st}}{2} + K\_y(\dot{y}^{com} - \dot{y}^{com,d}) \end{aligned}$$] (7) where [$$K\_x$$] and [$$K\_y$$] are the foot placement gains. Both hip servomotors are used to track the desired foot placement using an IK tracker, which is derived as follows. The horizontal position of the foot with respect to the COM, [$$\mathbf {r}^{f/com}\_{x,y} = [x^{f/com},y^{f/com}]^T$$] is differentiated as [$$\begin{aligned} \dot{\mathbf {r}}^{f/com}\_{x,y} = \left[ \begin{array}{c} \dot{x}^{f/com} \\ \dot{y}^{f/com} \end{array} \right] = \frac{\partial \mathbf {r}^{f/com}\_{x,y}}{\partial \mathbf {y}} \dot{\mathbf {y}} = \mathbf {J}\_3 \left[ \begin{array}{c} \dot{\phi \_1} \\ \dot{\phi \_2} \end{array} \right] + \mathbf {J}\_4 \left[ \begin{array}{c} \dot{\mathbf {Q}} \\ \dot{\phi \_3} \end{array} \right] \end{aligned}$$] (8) where [$$\mathbf {J}\_3 = \frac{\partial \mathbf {r}^{f/com}\_{x,y}}{\partial \mathbf {y}\_1}$$] and [$$\mathbf {y}\_1 = \left[ \begin{array}{c} \dot{\phi \_1} \\ \dot{\phi \_2} \end{array} \right] $$]. We can solve for [$$\left[ \begin{array}{c} \dot{\phi \_1} \\ \dot{\phi \_2} \end{array} \right] $$] as [$$\begin{aligned} \left[ \begin{array}{c} \dot{\phi \_1} \\ \dot{\phi \_2} \end{array} \right] = \mathbf {J}\_3^{-1} \left[ \left[ \begin{array}{c} \dot{x}^{f/com} \\ \dot{y}^{f/com} \end{array} \right] - \mathbf {J}\_4 \left[ \begin{array}{c} \dot{\mathbf {Q}} \\ \dot{\phi \_3} \end{array} \right] \right] \end{aligned}$$] (9) We can use (9), assuming [$$\dot{\mathbf {Q}}$$] and [$$\phi \_3$$] are zero to simplify the controller and reduce compution time, to derive desired hip joint positions, [$$[\phi \_1^d, \phi \_2^d]^T$$], given the foot placement tracking errors, [$$\varDelta x^{fp} = x^{f/com,d} - x^{f/com}$$] and [$$\varDelta y^{fp} = y^{f/com,d} - y^{f/com}$$], as [$$\begin{aligned} \left[ \begin{array}{c} \phi \_1 \\ \phi \_2 \end{array} \right] ^d = \left[ \begin{array}{c} \phi \_1 \\ \phi \_2 \end{array} \right] + \eta \mathbf {J}\_3^{-1} \left[ \begin{array}{c} \varDelta x^{fp} \\ \varDelta y^{fp} \end{array} \right] \end{aligned}$$] (10) where [$$\eta $$] is an IK tracking gain (i.e. step size). The desired hip joint positions are tracked with a proportional servo [$$f^{fp}\_1 = K^{fp}\_{p1}(\phi \_1^d - \phi \_1)$$] and [$$f^{fp}\_2 = K^{fp}\_{p2}(\phi \_2^d - \phi \_2)$$], where [$$K^{fp}\_{p1}$$] and [$$K^{fp}\_{p2}$$] are proportional gains. The Jacobians [$$\mathbf {J}\_1$$], [$$\mathbf {J}\_2$$], and [$$\mathbf {J}\_3$$] are derived using Matlab Symbolic Toolbox. 3 Results 3.1 Simulation We first developed a simulation to test, tune, and debug our state estimator and controller before implementing on hardware. The simulation was created using Matlab/Simulink/SimMechanics/SimScape software. The LEAP actuator and ground contact models were reused from our previous work [5]. We model Coulomb and viscous friction at the hip joints as well as torque-velocity constraints. We run the controller at 100 Hz (same as on hardware) and simulate the system with a variable time-step solver (ode45, relative error tolerance: 1e-4, absolute error tolerance 1e-5). The geometric and inertial parameters of the links were estimated from CAD (mass was measured on a scale). The spring constant of the LEAP mechanism was roughly optimized in a 1D simulation given our system mass (see [5]), and two stock springs of similar stiffness (2060 N/m total) were selected and installed in our leg. We avoid reporting all simulation parameters here due to space constraints. [] Fig. 2. Plots of simulated hopping data for 5 s. Gray represents positive contact detection, occurring when stroke drops below the contact threshold. From top to bottom are plots of: (1) X-component of IMU velocity, estimated and actual. All the velocity estimates are held constant during flight. (2) Y-component of IMU velocity, estimated and actual. (3) X component of COM velocity estimate, estimated and actual. (4) X component of COM velocity estimate, estimated and actual. (5) Roll joint angle and desired angle. (6) Pitch joint angle and desired angle. (7) Displacement of voice coil (stroke) and contact threshold. (8) Commanded voice coil voltage. Sensor signals include joint positions and velocities, IMU orientation quaternion and quaternion derivative (approximated discretely), and IMU acceleration. These signals are quantized and discretized to roughly match our hardware, and we inject Gaussian noise based on data reported by the sensor datasheets. We use the covariance matrix adaptation evolution strategy (CMA-ES) [6] to optimize the control parameters for maximum hopping time before fall. We present 5 s of simulated hopping data from the optimized controller in Fig. 2. The resulting optimized controller could hop for roughly 60 s before falling. 3.2 Physical Experiment In our physical setup, the robot was attached to a slack safety harness and maintained a serial connection to the host computer for data logging. No power was transmitted over these mechanical and data connections. A motion capture (MOCAP) system (Vicon MX series, 16 cameras, 120 fps) is used to record “ground truth” velocity estimates. MOCAP markers are placed in known locations on the torso ([$$m\_1$$]) and foot ([$$m\_4$$]) links. The MOCAP system provides position-time data for these markers, from which we can calculate positions and velocities of the tip of the foot and IMU. Here we present data from a hopping experiment for a single trial. For the same trial, we present plots (Fig. 3) of estimated and measured (from MOCAP) global-frame IMU velocity, estimated COM velocity, measured and desired pitch and roll servo angles ([$$\phi \_1$$] and [$$\phi \_2$$]), stroke length ([$$\phi \_3$$]), commanded voice coil voltage, and contact detection. We captured data until an operator intervened to prevent an imminent fall. We recorded data for 50 trials, and found an average and maximum hopping time of approx. 3.3 and 6.5 s, respectively. A video camera captured snapshots of a separate experiment (Fig. 4) at 29 fps (shown every other frame). [] Fig. 3. Plots of hopping data for a single trial. Gray represents positive contact detection, occurring when stroke drops below the contact threshold. From top to bottom are plots of: (1) X-component of IMU velocity, estimated and from motion capture (MOCAP) data. The estimate is held constant during flight. (2) Y-component of IMU velocity, estimated and from MOCAP data. (3) horizontal (x and y) components of COM velocity estimate. (4) Roll joint angle and desired angle. (5) Pitch joint angle and desired angle. (6) Displacement of voice coil (stroke) and contact threshold. (7) Commanded voice coil voltage. [] Fig. 4. Snapshots of a hopping experiment trial. Video was recorded at 29 fps, and presented every other frame. The shots are sequenced left to right, top to bottom. 4 Discussion 4.1 Velocity Estimation Accurate velocity estimation is critical to the performance of our controller. Our current estimator performs rather poorly, as the IMU velocity plots in Fig. 3 show. There are a couple reasons for this. First, our model assumes static foot contact with the ground, despite the existence of slip and deformation of our rubber foot. Second, our sensors are imperfect, and contain quantization error and noise, among other inaccuracies. We could improve velocity estimation with better and/or redundant sensors, or with a better model. Our current model is purely kinematic. A better approach might comprise an unscented Kalman filter, using a forward dynamics model. Another approach might use a learning model to approximate our ground truth MOCAP velocity data. 4.2 Hopping Controller Our current controller might be improved by relaxing the static torso/static [$$q\_3$$] joint assumptions. Alternatively, a momentum-based controller, which takes into account the robot’s constant angular momentum during flight, might compensate for the large and unwieldy torso-to-leg inertia ratio. The bang-bang thrust controller is inefficient, and causes the voice coil to overheat if used for extended periods of time. A more efficient controller, which exploits velocity-efficiency characteristic of the VCA, might achieve the same performance with less energy consumption. 4.3 Hardware There are many hardware improvements that would likely improve hopping performance of our robot. First, the most pressing issue is the relatively large leg inertia, which limits foot-placement control authority during flight. Increasing torso inertia would be the simplest way to overcome this issue, but would increase the load on the [$$q\_3$$] actuator. Leg inertia has already been minimized, and would be difficult to further decrease. Second, the servomotors of the hip joint might be replaced by direct drive motors. Our current servomotors cannot perform accurate torque control, lack an effective derivative term in their PID control loop, and contain gearbox backlash. Direct-drive motors may perform better. The authors note the Delta Hopper robot (mentioned briefly in [7]) as an alternative to our design. Its single leg is a parallel 3-dof mechanism with large-radius direct drive motors at the hip. Such a design allows for torque-based control methods, decreases leg inertia, and would likely be more successful in continuous hopping. Third, computational power could be improved by using a more powerful microcontroller, or a mini computer (e.g. Odroid or Raspberry Pi), which would enable more complex estimation and control algorithms, as mentioned previously. Finally, active cooling of the voice coil, with a fan or other cooling system, may lessen the problem of overheating. 4.4 Future Work and Conclusions In this paper, the design and control of an untethered singled-legged hopping robot was presented. Our simulations and experiments have shown that the LEAP mechanism can be employed as an actuator for the robot. It can likely be used in other robot designs as well, especially those that aren’t as energetically demanding. While we fell short of our goal of continuous, indefinite hopping, we showed that such a gait is possible for an untethered robot for short periods of time. In the future, we plan to implement many of the previously proposed changes. Furthermore, we plan to redesign the LEAP mechanism to be more modular and compact for use in a multi-legged robot. References 1. Hardarson, F.: Locomotion for Difficult Terrain. Citeseer (1998) 2. Raibert, M.H., Brown, H.B., Chepponis, M.: Experiments in balance with a 3d one-legged hopping machine. Int. J. Rob. Res. 3(2), 75–92 (1984)CrossRef 3. Zeglin, G., Brown, B.: Control of a bow leg hopping robot. In: Proceedings of the 1998 IEEE International Conference on Robotics and Automation, vol. 1, pp. 793–798. IEEE (1998) 4. Sayyad, A., Seth, B., Seshu, P.: Single-legged hopping robotics research-a review. Robotica 25(05), 587–613 (2007)CrossRef 5. Batts, Z., Kim, J., Yamane, K.: Design of a hopping mechanism using a voice coil actuator: Linear elastic actuator in parallel (leap). In: 2016 IEEE International Conference on Robotics and Automation (ICRA), May 2016 6. Hansen, N.: The CMA evolution strategy: a comparing review. In: Lozano, J.A., Larrañaga, P., Inza, I., Bengoetxea, E. (eds.) Towards a New Evolutionary Computation. STUDFUZZ, vol. 192, pp. 75–102. Springer, Heidelberg (2006)CrossRef 7. Kenneally, G., De, A., Koditschek, D.: Design principles for a family of direct-drive legged robots. IEEE Rob. Autom. Lett. 1(2), 900–907 (2016)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_11 Discrete Foot Shape Changes Improve Dynamics of a Hopping Robot Fabio Giardina¹   and Fumiya Iida¹   (1) Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK     Fabio Giardina (Corresponding author) Email: ffg20@cam.ac.uk URL: http://divf.eng.cam.ac.uk/birl   Fumiya Iida Email: fi224@cam.ac.uk URL: http://divf.eng.cam.ac.uk/birl Abstract Legged locomotion is characterised by a repetitive appearance of impulsive ground collisions which are strongly influencing the locomotion behaviour. The collisions depend on the shape of the contacting foot, but little is known on how the foot needs to be shaped to assist stable and fast locomotion. This paper investigates discrepancies in locomotion dynamics caused by a discrete foot shape change. A curved foot, open-loop controlled hopping robot which can be switched between two foot shape states was built and tested for the experimental investigations. The results indicate that the right timing of foot shape change can induce a variety of locomotion gaits and increase maximal speed by up to 40%, without the shape change doing any positive work on the robot. Three distinct take off cases were identified which depend on the robot’s state and foot shape. The switching between the cases in consecutive hops can explain the observed behaviour qualitatively as presented in this paper. 1 Introduction In legged locomotion, impacts of a moving body with the ground are well approximated with impulsive forces, especially after McGeer discovered passive dynamic walking in rigid machines [1]. Since then, researchers have found simple models to explain various behaviours in animal locomotion using impact inducing collision models, e.g. in walking [2] or hopping [3]. Impacts are usually considered unavoidable, yet undesirable as they are inherently coupled to mechanical energy loss. Although there are theoretical studies that show cases of legged locomotion without collisional energy loss [4], every legged animal and robot undergoes some loss due to impact in real systems. In fact, it was found that the energetic cost for human like walking is mainly due to the impulsive impacts in the step-to-step transition [5]. Minimising the impact losses can then be achieved by applying toe-off impulses just before the step transition [6, 7]. A detailed collisional analysis of a simple model by Ruina et al. suggests that multiple impacts during the stance phase reduces the energetic loss, due to a sequenced redirection of the centre of mass [8]. Stability considerations in a simple double pendulum model revealed that impacts provide essential stabilising effects which cannot be induced otherwise, e.g. the skipping of unstable portions of the phase portrait [9, 10]. Even though mathematical tools exist to analyse the influence of impulsive forces in mechanical systems, e.g. by means of the impulse extended Lyapunov function [11], it is hard to define design rules for legged systems due to the convoluted dependency of dynamics and morphology. Nevertheless, a simple analysis of a bipedal model shows that a flat or round foot shape improves energy efficiency over a point foot model [12], which was also concluded in a study with human subjects [13], where the authors point out that the rolling like behaviour of the centre of pressure progression in human walking is beneficial for the centre of mass redirection in terms of energy efficiency. From this perspective, it is important to carefully design the morphology as a function of impact losses, for which a mathematical method is presented in [14]. Furthermore, it might also be beneficial to change the foot shape during the locomotion gait to adapt to impacts to which the pronation of the human foot just before touchdown might hint [15]. Shape changing locomotive robots have been studied in the past, such as the contour changing wheel [16], yet we had to acknowledged that the role of shape induced impacts in robotics is an understudied topic. [] Fig. 1. The robot used to investigate influence of shape changes in hopping on locomotion speed and table with main mechanical parameter values. In this paper we are investigating how discrete and controlled shape changes in a hopping robot can alter locomotion properties, and use forward speed as the main measure for evaluation. The next section presents the used system and methods to study the influence of shape induced impacts on locomotion, results are illustrated in Sect. 3, Sect. 4 contains an analysis of the main findings, and the last Sect. 5 concludes the paper. 2 Methods To analyse the influence of shape change during locomotion, a curved foot hopping robot with two linked rigid bodies was built, as is shown in Fig. 1. This system is driven by a motor torque in the joint between the rigid bodies, generated by a motor on the upper body tip. Two linear extension springs are placed between the upper and lower body to achieve parallel elastic actuation. The robot is equipped with a curved foot shape, which has proven to show good performance in both stability and efficiency [17]. The foot is made out of plywood and is operating on a wooden floor. The main geometrical properties of the robot can be found in the table depicted in Fig. 1. In order to induce a foot shape change during locomotion, a small mechanical structure, from now on referred to as cane, is placed in the front tip of the foot which can be either extended or retracted, and hence switch between two discrete shape states. When extended, the front part of the curved foot is bypassed during rolling due to the blocking cane, altering the dynamics and therefore locomotion behaviour. When the cane is retracted the robot moves as if no cane was there to influence locomotion. [] Fig. 2. Main motor and cane open-loop control as a function of time. T is the period of the assigned main motor control frequency, CD is the delay as a fraction of the period, and CDC is the cane duty cycle within the period. The main motor is controlled by an open-loop signal which induces a motor torque in the robot joint approximated by a bi-directional pulse as shown in Fig. 2. For a curved foot hopping robot, this type of signal has shown to be better in terms of stability and efficiency as compared to a sinusoidal signal [18]. The motor torque is applied first in positive direction for 100 ms and then in negative direction for another 100 ms with respect to the lower body (causing the robot to contract first and then extend) with a torque amplitude of [$$\pm 0.4\,Nm$$]. The period time T is given by the applied control frequency. The cane is being extended according to the cane duty cycle CDC in synchronisation with the motor torque frequency for [$$CDC\cdot T$$] seconds and after a delay of [$$CD\cdot T$$] seconds. The influence of the cane at various times is tested by varying CDC and CD for constant open-loop control parameters of the motor. The progression of the robot for two extreme cases with the cane being either permanently retracted or extended is shown in a series of pictures in Fig. 3. It is important to note that the cane is designed not to do any positive work on the robot when in contact with the ground, but to induce only plastic collisions. The only energy needed to operate the cane is for retraction and extension during non-ground contact phase, which is assumed to be negligible. The robot is driven by a 70W Maxon EC 45 flat motor and is controlled via a Roboteq SBL 1360 motor controller. The cane is being retracted with a Parallax 6V standard servo and a linear spring is pulling it back to the extended position if the servo motor is disabled. The robot is untethered and powered by three lithium polymer batteries, providing 24 V. Radio modules are installed to establish communication with the host pc, and an Arduino Mega 2560 micro controller coordinates the operation. The motion is tracked using 6 reflective markers placed on the robot which are being recorded by an OptiTrack motion capturing system. Trajectories are also being evaluated from video analysis using the software Kinovea. [] Fig. 3. Robot progression over one period T of the main motor control for the case with retracted cane and extended cane. 3 Results The following results are shown for a main motor control frequency of 2.8 Hz, and a pulsed motor torque of [$$\pm 0.4\,Nm$$] for a 10 s run per experiment. It is important to note that the only difference in the remaining report is induced by the timing of the passive cane, not the main motor control. [] Fig. 4. Trajectories of upper body top marker for a retracted cane (a), and extended cane (b) for the same main motor control. The grey circles indicate the main motor timing with period T. Figure 4 shows the trajectory of the uppermost tracked marker on the upper body of the robot for different cane control. Figure 4(a) shows the behaviour with retracted cane throughout the run, and Fig. 4(b) the behaviour with permanently extended cane. The grey circles in both plots indicate the start of a new pulse cycle of the main motor. For the retracted cane case, the dynamics suggest a period-1 behaviour, meaning that the trajectory reaches the initial state after T seconds with respect to the main motor actuation period. For the extended cane, another regular pattern emerges, although the system state returns to identical values only after 2T. This period-2 behaviour was found to be slightly slower than the cane-less period-1 motion, which might be explained by the “looping” of the trajectory, i.e. the backward motion of the tracked marker. Figure 5 illustrates the locomotion distance covered as a function of the cane timing. The abscissa shows the cane duty cycle CDC and cane delay per period CD in the format CDC / CD that was applied to the run. A value of 0.5 / 0.2 for example indicates a duty cycle of 0.5 and a cane delay of 0.2T. Figure 5(a) shows the cane timing for duty cycles of 0.5 and delays between 0 and 0.5T, whereas Fig. 5(b) illustrates cane duty cycles of 1 with delays ranging from 0 to T. The ordinate shows the average travelled distance per hop, which is proportional to the average hopping speed. The performance of the cane being permanently on or off is indicated with a solid grey or black line, respectively. Note that the robot was initiated from a resting position, and the average hopping distance includes also the initial transient phase. The series of experiments was repeated five times, and the figure shows error bars of one standard deviation around the average. [] Fig. 5. Averaged travelled distance per hop over a 10 s run for different cane timings for open-loop control with [$$\pm 0.4Nm$$] motor torque, and 2.8 Hz actuation frequency. The cane timing on the abscissa is indicated by cane duty cycle CDC over cane delay per period CD in the format CDC / CD. The error bar indicates one standard deviation for the five sets of experiments that were conducted. The results show how for constant main motor control, the travelled distance can be altered significantly by the timing of the cane. If the cane has a duty cycle of 0.5, the hopping distance is increased for almost any cane delay. The peak velocity is reached when the cane is turned on after a delay of 0.1T, covering a distance of around 8.4 cm per hop, which is around [$$\frac{1}{4}$$] body length per hop. This corresponds to a speed increase of roughly [$$40\%$$] compared to the cases of the cane being permanently on or off. If the cane is operated only every second period, which is for duty cycles and delays per period of [$$CDC+CD \ge 1$$], the performance decreases generally compared to the [$$0.5\,CDC$$] case and is most of the time even lower than cases of the cane being permanently on or off. 4 Analysis The apparent increase of hopping speeds as a function of the cane timing may seem surprising given that there is no positive work being done by the cane, but only passive and energy consuming impacts are induced through the discrete foot shape change. In order to understand what is happening in the collisional process one needs to consider the actual cane contact. Figure 6 shows the trajectory without the initial transient phase of the upper body top marker for the fastest cane control with [$$CDC\,0.5$$] and [$$CD\,0.1$$]. The parts of the plotted trajectories which are grey indicate contact of the cane with the ground which was extracted visually. It is interesting to note that the “looping” of the upper body trajectory is somehow being avoided by the cane timing of the fast cane control. This is even more surprising, as the motion seems to be rather chaotic and no distinct periodicity can be observed. One might expect the looping to occur at least once, but it was not observed in any of the five trials. [] Fig. 6. Trajectory of the upper body top marker and cane ground contact times for the fastest case with duty cycle of 0.5 and delay of 0.1T. After analysing the resulting motion of the robot in the captured trajectories, we identified three occurring cases just before take off which seem to distinctly define the subsequent dynamics. In order to simplify understanding, we are making use of the wheel with eccentric point mass model as is presented in [3] and illustrations of the three cases are depicted in Fig. 7. The first case (a) is naturally emerging in the stable cane-less hopping motion and is characterised by a leading ground contact point to the centre of mass with respect to the direction of travel just before take off. The impulsive motor torque then causes a backward rotation during flight phase and the robot lands with a leading centre of mass position relative to the ground contact point. This causes the robot to roll in forward direction after touchdown. Interestingly, the point of touchdown is naturally adjusted such that the same take off posture is achieved after the rolling phase in every hopping iteration, hinting to self-stable characteristics of the system. Case (b) occurs with an extended cane and a forward rolling angular velocity [$$\dot{\phi }$$] which is negligible. The ground contact point and centre of mass of the robot are roughly aligned with respect to the travelling direction. The take off pulse causes a strong backward rotation, shifting the point of touchdown further back than in case (a) and hence inducing accelerated rolling in the next hopping iteration. Lastly, case (c) is observed with an extended cane and a forward rolling angular velocity [$$\dot{\phi }\gg 0$$]. In this case, the impulsive impact of the cane with the ground induces a rotation of the centre of mass around the point of cane contact, which promotes a ballistic trajectory that is favourable for a long jump. This behaviour is similar to pole vaulting, where the athlete is using the pole’s contact point as a centre of rotation to surpass a raised bar. Due to the ballistic effect, the pulsed actuation torque can only cause a slight backward rotation, leading to a small distance of point of touchdown and centre of mass in travelling direction. This means that the gained rolling speed during stance phase is the smallest of the three presented cases. [] Fig. 7. Three cases of observed take off positions using a wheel with eccentric point mass model presented in [3]. The grey filled circles indicate the centre of mass of the robot model, and the black arrows show the main take off motion after the pulsed motor torque was applied. [] Fig. 8. Transient trajectories of the upper body top marker for the case with the cane being permanently on, and the case with duty cycle of 0.5 and delay of 0.1T. The three shown cases can be used to explain the observed discrepancies in locomotion trajectories. As was already explained, case (a) causes the cane-less period-1 motion, which naturally emerges after a few transient hops. The period-2 motion, as shown for example in Fig. 4(b) with the cane being permanently engaged, is explained by a switching between cases (b) and (c). The high rolling velocity [$$\dot{\phi }\gg 0$$] in case (b) after touchdown causes the pole vaulting effect seen in case (c), and the small rotational retraction in case (c) then causes a small angular rolling velocity [$$\dot{\phi }\approx 0$$], which in turn gives rise to case (b) in the next hopping iteration. Now, how can this simple model explain the trajectories observed in Fig. 6(b) for the [$$CDC/CD=0.5/0.1$$]? We observed that this cane control causes the robot to operate mostly in case (c), the pole vaulting mode. Whenever the decelerating case (b) is about to be induced by a previous case (c), the cane is being blocked by the ground and can not extend due to the body posture and previous retraction of cane, which really induces the faster cane-less case (a) instead of (b). The robot naturally chooses the best option for increased locomotion speed with this control and avoids case (b) completely, only operating in (c) and switching to (a) in some extreme cases. No period-2 motion is observed as the cane extends not always quite to its full extent before ground contact occurs, leading to a slightly different and chaotic behaviour. Figure 8 compares the transient phase of the best control case [$$CDC/CD=0.5/0.1$$] and the case with permanently engaged cane. The first appearance of the blocking cane, inducing the state (a) instead of (b) and avoiding the decelerating looping trajectory, is indicated as well. 5 Conclusion In this paper we presented that a discrete change in foot shape during locomotion of an open-loop controlled hopping robot can induce a variety of locomotion gaits and increase the travelling speed. The foot shape is passive and does not do any positive work on the robot, but influences the time and direction of dissipative impacts. With a simple model, three distinct cases were identified just before take off, which define how the pulsed torque influences the touchdown posture. The fastest locomotion speed can only be achieved by switching between the accelerating cases and avoiding the decelerating case, which can be realised by the right timing of shape change. The presented insights may provide a new perspective for the development of control laws for increased locomotion stability, efficiency and speed through foot shape changes. References 1. McGeer, T.: Passive dynamic walking. Int. J. Robot. Res. 9(2), 62–82 (1990)CrossRef 2. Garcia, M., et al.: The simplest walking model: stability, complexity, and scaling. J. Biomech. Eng. 120(2), 281–288 (1998)CrossRef 3. Giardina, F., Iida, F.: Simulation of forward hopping dynamics in robots and animals using a template with a circular foot and impulsive actuation. In: 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (2016) 4. Gomes, M., Ruina, A.: Walking model with no energy cost. Phys. Rev. E 83(3), 032901 (2011)MathSciNetCrossRef 5. Kuo, A.D., Maxwell, J.D., Ruina, A.: Energetic consequences of walking like an inverted pendulum: step-to-step transitions. Exerc. Sport Sci. Rev. 33(2), 88–97 (2005)CrossRef 6. Kuo, A.D.: Energetics of actively powered locomotion using the simplest walking model. J. Biomech. Eng. 124(1), 113–120 (2002)MathSciNetCrossRef 7. Choi, J.H., Grizzle, J.W.: Feedback control of an underactuated planar bipedal robot with impulsive foot action. Robotica 23(5), 567–580 (2005)CrossRef 8. Ruina, A., Bertram, J.E., Srinivasan, M.: A collisional model of the energetic cost of support work qualitatively explains leg sequencing in walking and galloping, pseudo-elastic leg behavior in running and the walk-to-run transition. J. Theor. Biol. 237(2), 170–192 (2005)MathSciNetCrossRef 9. Hurmuzlu, Y., Moskowitz, G.D.: The role of impact in the stability of bipedal locomotion. Dyn. Stab. Syst. 1(3), 217–234 (1986)MATH 10. Hurmuzlu, Y., Moskowitz, G.D.: Bipedal locomotion stabilized by impact and switching: I. two-and three-dimensional, three-element models. Dyn. Stab. Syst. 2(2), 73–96 (1987)MATH 11. Pavlidis, T.: Stability of systems described by differential equations containing impulses. IEEE Trans. Autom. Control 12(1), 43–45 (1967)MathSciNetCrossRef 12. Kwan, M., Hubbard, M.: Optimal foot shape for a passive dynamic biped. J. Theor. Biol. 248(2), 331–339 (2007)CrossRef 13. Adamczyk, P.G., Kuo, A.D.: Mechanical and energetic consequences of rolling foot shape in human walking. J. Exp. Biol. 216(14), 2722–2731 (2013)CrossRef 14. Mu, X., Wu, Q.: On impact dynamics and contact events for biped robots via impact effects. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 36(6), 1364–1372 (2006)CrossRef 15. Perry, S.D., Lafortune, M.A.: Influences of inversion/eversion of the foot upon impact loading during locomotion. Clin. Biomech. 10(5), 253–257 (1995)CrossRef 16. Mellinger, D., Kumar, V., Yim, M.: Control of locomotion with shape-changing wheels. In: IEEE International Conference on Robotics and Automation, ICRA 2009 (2009) 17. Gunther, F., Giardina, F., Iida, F.: Self-stable one-legged hopping using a curved foot. In: 2014 IEEE International Conference on Robotics and Automation (ICRA) (2014) 18. Hunt, J., Giardina, F., Rosendo, A., Iida, F.: Improving efficiency for an open-loop-controlled locomotion with a pulsed actuation. IEEE/ASME Trans. Mechatron. 21(2), 1581–1591 (2016)CrossRef Grasping 1 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_12 Learning Grasps in a Synergy-based Framework Fanny Ficuciello¹  , Damiano Zaccara¹   and Bruno Siciliano¹   (1) Dipartimento di Ingegneria Elettrica e Tecnologie dell’Informazione, Università degli Studi di Napoli Federico II, via Claudio 21, 80125 Napoli, Italy     Fanny Ficuciello (Corresponding author) Email: fanny.ficuciello@unina.it   Damiano Zaccara Email: damianozaccara@libero.it   Bruno Siciliano Email: bruno.siciliano@unina.it Abstract In this work, a supervised learning strategy has been applied in conjunction with a control strategy to provide anthropomorphic hand-arm systems with autonomous grasping capabilities. Both learning and control algorithms have been developed in a synergy-based framework in order to address issues related to high dimension of the configuration space, that typically characterizes robotic hands and arms with human-like kinematics. An experimental setup has been built to learn hand-arm motion from humans during reaching and grasping tasks. Then, a Neural Network (NN) has been realized to generalize the grasps learned by imitation. Since the NN approximates the relationship between the object characteristics and the grasp configuration of the hand-arm system, a synergy-based control strategy has been applied to overcome planning errors. The reach-to-grasp strategy has been tested on a setup constituted by the KUKA LWR 4+ Arm and the SCHUNK 5-Finger Hand. Keywords Postural SynergiesSupervised LearningHand-arm Anthropomorphic Systems 1 Introduction Grasp control of high Degree-of-Freedom (DoF) devices in unstructured environment presents several difficulties such as the need to have a good model of the world and to develop a reliable and smart strategy in the case of underactuated devices and redundant kinematics. The human being is a good example to learn how to perform efficiently prehension tasks. For this purpose human observation is the first issue to be addressed. In this work, motion tracking strategies using vision and a bio-kinetic suite have been used for the hand and the arm to learn grasping by imitation. Supervised learning based on Multiple Neural Networks (MNN) has been adopted in a synergy-based framework to generalize the results obtained with imitation learning. To overcome the limits of the MNN in generalizing grasps it is of great interest to use control strategies together with learning strategies. The main idea is to use the control strategy, developed in [1], to optimize the execution of planned grasps synthesized in the synergies subspace. The synergy coefficients corresponding to the final grasp configuration learned by human imitation are used to train the artificial neural network and, in turn, to generalize grasp planning of unknown objects. The KUKA LWR 4+ Arm has been used to perform the reaching phase towards the object, because its human-like kinematics allows replicating the human behavior accurately. 2 Technical Approach and Motivation The learning strategy relies on dimensionality reduction of the configuration space of both the hand and the arm, and it is based on the imitation of human hand-arm motion during the execution of reach-to-grasp tasks commonly performed in daily life. The supervised learning algorithm has the role of generalizing the reach-to-grasp tasks, learned by imitation, to different object, grasp type and different environment conditions, such as different shape and dimension of the object as well as different orientation and position with respect to a defined frame in the workspace. In order to learn by imitation, a mapping method of the human hand-arm motion to the robotic system is needed. This procedure is necessary to reproduce the configuration on the robotic system as close as possible to the human reference. Once a variety of hand-arm configurations, chosen to cover a complete grasping taxonomy [2], have been mapped and stored in a data base of robot grasps, it is possible to compute the synergy subspaces of the hand and of the arm. Afterwards, dimesionality reduction will be used to make possible the application of NN supervised learning to high-DoFs devices. Indeed, synergies reduce the search space of the learning algorithm ensuring convergence and performance and allowing generalization from mimicked examples. The hand and arm synergies subspaces have been computed independently and in two steps. 2.1 Experimental Setup The robotic system used for the experimental tests is constituted by the SCHUNK 5-Finger Hand (S5FH) [3] and the KUKA LWR 4+ Arm. The hand possesses 20 degrees of mobility and it is designed with mechanical synergies that regulate the kinematic couplings between the finger joints while decreasing the number of motors from 20 to 9. Thearm has 7 DoFs, thus it is one-degree redundant like the human arm. The Robot Operating System (ROS) is used to control both the SCHUNK 5-Finger Hand and the KUKA LWR 4+ Arm. A SVH Driver suite has been developed by Forschungszentrum Informatik (FZI) for the low-level interface and enables an easy control of the hand using a customized library written in C++, while the KUKA LWR 4+ Arm is controlled by means of the FRI library. For the motion acquisition, commercial low-cost RGB+Depth (RGBD) camera, such as the Kinect from Microsoft Corp., has been used for 3D human hand fingertips detection. For the arm, the Xsens MVN suite motion capture system has been used. It consists essentially of 17MTx inertial and magnetic measurement units and comprises 3D gyroscopes, 3D accelerometers and 3D magnetometers sensors through which it is possible to obtain the position measurement and orientation of parts of the body of the wearer. 2.2 Methods for Observation and Synergies Computation Different methods can be used for synergies computation. The first issue to address consists in evaluating the more effective solution between two separated synergies subspaces for the hand and the arm rather than the whole subspace. As a matter of fact, the hand and the arm have two different workspaces, i.e. the hand is a redundant branched device and presents a behavior (small motions) different from that of the arm (large motions) involving not only different joint motions and velocities, but also different inertia and kinematics. Furthermore, despite the smaller workspace, the hand has the possibility to take a higher number of combinations of joint values than the arm, thus the motions related to the various grasps are differentiated from each other and may require a greater number of synergies to be reproduced while ensuring a small error. According to this, we have decided to compute two separated synergies subspace for dimansionality reduction. Starting from synergy-based planning and control algorithms, developed for anthropomorphic hands [1, 4, 5] an incremental work to extend previous studies to the arm has been addressed. 2.3 The Hand A data set of grasps, measured on five human subjects and available from [6], is used. For this purpose, a synergies Jacobian can be computed and suitably used in the Closed-Loop Inverse Kinematics (CLIK) algorithm to map the grasps from the human hand to the robotic hand. The method developed in [6] has been adapted and tested to evaluate the first three synergies on the S5FH under-actuated five-fingered hand. The details of the grasping data and mapping method can be found in [1, 6]. 2.4 The Arm In order to map movements from the human to the robotic arm, several solutions can be adopted. The MTx sensors, mounted on the Xsens suite, provide position and orientation, in the global frame of the motion capture system, of the segments of the human body on which they are positioned. For this reason, an immediate solution would be to map directly the position and the orientation of the human hand palm into the base frame of the robotic arm and afterwards to apply the CLIK algorithm, based on the robot kinematics, to reconstruct the arm configuration. In this way, the hand trajectory is accurately reproduced. On the other hand, due to kinematic differences between the human and the robotic arm and to the one degree of redundancy, the mapped motion is not human-like. To reproduce human-like motion, an alternative mapping method has been implemented. Two different CLIK algorithms have been used, taking respectively the elbow and wrist orientation as reference input. The first CLIK algorithm utilizes the kinematics of the first three joints of the robot corresponding to the spherical joint of the human shoulder. To compute the elbow orientation reference for the CLIK algorithm, the orientation matrix of the elbow [$${{\varvec{R}}}\_{e}$$], provided by MVN in the global frame, has been expressed with respect to the sternum frame to overcome changes due to sternum rotation: [$$\begin{aligned} {{\varvec{R}}}\_{e}^{s}={{\varvec{R}}}\_{z}^T(\alpha ) {{\varvec{R}}}\_{s}^T {{\varvec{R}}}\_{e}, \end{aligned}$$] (1) where [$${{\varvec{R}}}\_{s}$$] is the sternum rotation matrix expressed with respect to the global frame and [$${{\varvec{R}}}\_{z}(\alpha )$$] represents a rotation of [$$\alpha = \pi $$] about z axis, that is required to align the global reference frame of the MVN with the base frame of the KUKA LWR 4+ Arm. However, the initial configurations of the KUKA LWR 4+ Arm and of the human arm cannot be the same. Therefore, the mutual rotation matrix between the initial and the current frame of the human arm has been evaluated: [$$\begin{aligned} {{\varvec{R}}}\_{m\_e}={{\varvec{R}}}\_{e}^{s}{{\varvec{R}}}\_{i\_e}^T, \end{aligned}$$] (2) where [$${{\varvec{R}}}\_{i\_e}$$] represents the initial arm orientation expressed into the sternum reference frame. Finally, the desired CLIK reference has been obtained by pre-multiplying the initial KUKA LWR 4+ Arm elbow rotation matrix [$${{\varvec{R}}}\_{k\_e}$$], expressed in the robot base frame, by the mutual rotation matrix (2): [$$\begin{aligned} {{\varvec{R}}}\_{d\_e}={{\varvec{R}}}\_{m\_e} {{\varvec{R}}}\_{k\_e}, \end{aligned}$$] (3) The second CLIK algorithm is related to the last four joints of the elbow and wrist. In this case, the reference is constituted by the rotation matrix between the initial and the current human hand frame, reported in the elbow frame: [$$\begin{aligned} {{\varvec{R}}}\_{h}^{s}={{\varvec{R}}}\_{z}^T(\alpha ) {{\varvec{R}}}\_{s}^T {{\varvec{R}}}\_{h} \quad \quad \quad&{{\varvec{R}}}\_{m\_h}={{\varvec{R}}}\_{h}^{s} {{\varvec{R}}}\_{i\_h}^T \quad&{{\varvec{R}}}\_{d\_h}= {{\varvec{R}}}\_{d\_a}^T {{\varvec{R}}}\_{m\_h} {{\varvec{R}}}\_{k\_h}, \end{aligned}$$] (4) where [$${{\varvec{R}}}\_{h}$$] is the hand orientation matrix provided by MVN, [$${{\varvec{R}}}\_{i\_h}$$] represents the initial hand orientation expressed into the sternum reference frame and [$${{\varvec{R}}}\_{k\_h}$$] is the initial robotic hand rotation matrix expressed into the base frame of the robotic arm. Thus, the manipulator is seen as constituted by two kinematic chains with three and four DoFs. Since only the orientation is given as input to the two CLIK algorithms, the second kinematic chain has a redundant DoF. Thus, the angle between the arm and the forearm, computed using MVN measurements and geometric properties, has been mapped in the null space of the Jacobian matrix of the second kinematic chain in order to reproduce the forearm flexion/extension movements. About the arm, as preliminary study, we have chosen the Cartesian space for synergies computation since it has 6 DoFs despite the 7 DoFs of the configuration space. Moreover, the pose of the hand palm has a crucial role on the successful execution of the grasp. Nevertheless, in future works we reserve to make further evaluations on the convenience of choosing the Cartesian space rather than the configuration space for synergies representation. The target hand pose for a set of objects and grasps has been learned from demostration, by teleoperating the KUKA LWR 4+ Arm with the Xsens suite, as shown in Fig. 1. Let [$${{\varvec{p}}}\_h^{obj}$$] the hand position and [$${{\varvec{Q}}}\_h^{obj} = \{ \eta \_h, \varvec{\epsilon }\_{h} \}$$] the unit quaternion representation of the hand orientation with respect to the object rereference frame, the robotic hand pose can be represented with a vector [$${{\varvec{x}}}\in \mathrm {I\!R}{}^7$$]: [$$\begin{aligned} {{\varvec{x}}}= \begin{bmatrix} {{\varvec{p}}}\_h^{obj} \\ \eta \_h \\ \varvec{\epsilon }\_{h}\end{bmatrix}. \end{aligned}$$] (5) For synergies computation, two matrices have been built: the position matrix [$${{\varvec{P}}}= \{{{\varvec{p}}}\_{h\_i}^{obj} \ | \ i = 1,\dots ,38\}$$] and the matrix of the quaternions [$${{\varvec{E}}}= \{\varvec{\epsilon }\_{h\_i}^{obj} \ | \ i = 1,\dots ,38\}$$], where 38 is the number of the learned grasps from human demonstration. Then, the matrices [$${{\varvec{F}}}\_P = \{{{\varvec{p}}}\_{h\_i}^{obj} - \bar{{{\varvec{p}}}}\_{h}^{obj} \ | \ i = 1,\dots ,38\}$$] and [$${{\varvec{F}}}\_E = \{\varvec{\epsilon }\_{h\_i}^{obj} - \bar{\varvec{\epsilon }}\_{h}^{obj} \ | \ i = 1,\dots ,38\}$$] have been computed, where [$$\bar{{{\varvec{p}}}}\_{h}^{obj}$$] and [$$\bar{\varvec{\epsilon }}\_{h}^{obj}$$] are the mean vectors. The PCA has been performed on the matrices [$${{\varvec{F}}}\_P$$] and [$${{\varvec{F}}}\_E$$] and two bases of eigenvectors, [$${{\varvec{S}}}\_p \in \mathrm {I\!R}{}^{3\times 3}$$] and [$${{\varvec{S}}}\_{\epsilon } \in \mathrm {I\!R}{}^{3\times 3}$$], ordered in decreasing order of variance, have been found. By considering only the first principal component of the two bases, [$${{\varvec{e}}}\_p\in \mathrm {I\!R}{}^3$$] and [$${{\varvec{e}}}\_{\epsilon }\in \mathrm {I\!R}{}^3$$], the position and orientation of the hand can be found in the synergies subspaces, by specifying only two parameters, namely [$$\alpha \_{p\_i}$$] and [$$\alpha \_{\epsilon \_i}$$]: [$$\begin{aligned} {{\varvec{p}}}\_{h\_i}^{obj} = \bar{{{\varvec{p}}}}\_{h}^{obj} + {{\varvec{e}}}\_p \alpha \_{p\_i} \end{aligned}$$] (6) [$$\begin{aligned} \varvec{\epsilon }\_{h\_i}^{obj} = \bar{\varvec{\epsilon }}\_{h}^{obj} + {{\varvec{e}}}\_{\epsilon } \alpha \_{\epsilon \_i}, \end{aligned}$$] (7) while the scalar part of the quaternion can be found as follows: [$$\begin{aligned} \eta \_{h\_i} = \sqrt{\left( 1 - \left( \varvec{\epsilon }\_{h\_ix}^2 + \varvec{\epsilon }\_{h\_iy}^2 + \varvec{\epsilon }\_{h\_iz}^2 \right) \right) }. \end{aligned}$$] (8) [] Fig. 1. Snapshots of the experimental set-up during the telemanipulation control of the robot. 3 Supervised Learning for the Hand-arm System To confer autonomy to the grasping method, two Multilayer Neural Networks (MNNs) with the same architecture have been designed for the hand and for the arm. A multilayer feedforward neural network with nonlinear transfer function has been adopted thanks to the ability to learn any function with a finite number of discontinuities. In order to train the MNNs, a library of grasp examples is needed. The training set is constituted by eleven spherical objects, whose diameters are included in a range between 2.7 [cm] and 9.6 [cm], eleven cylindrical objects, with height between 16 [cm] and 25 [cm] and diameters included in a range between 1.2 [cm] and 7.5 [cm]. Since the input patterns must have the same dimension for all the objects, about the spheres a second parameter, namely the height, has been introduced and obviously it is chosen equal to the diameter. Finally, a third object category of parallelepiped-shaped objects has been considered. For this category a further input it is needed, i.e. the length, and it has been included in a range between 8.5 [cm] and 12 [cm], while, for both cylinders and spheres, this parameter has been set to zero. Furthermore, in order to identify the type of the grasp, an additional binary input has been introduced. [] Fig. 2. Schematic representation of the implemented neural network for the hand. The latter, for both cylinders and spheres, assumes unitary value for precision grasp and zero value for power grasp. Instead, for the parallelepiped-shaped objects this input assumes unitary value for a lateral grasp and zero value for the other cases. The use of synergies reduces the search space of the learning algorithm addressing a simplification in the neural network architecture design, especially regarding the number of hidden layers and the neurons in each of them. In the same way, a simplified representation of the hand pose reduces the output number hand MNN. Therefore, the networks receive as input four parameters: diameter, height and length of the object and the “grasp type input”. The outputs of the hand MNN are the three coefficients of the S5FH motor synergies coefficients that determine the fingers configuration, while the output of the arm MNN are the two coefficients of Cartesian space synergies that determine the hand palm pose relative to the target object. The MNNs have been implemented in Matlab using the Neural Network Toolbox (NN Toolbox). The network architecture has been experimentally chosen by changing the number of neurons and hidden layers, and in turn by analyzing the corresponding NN performance in terms of Mean-Squared Error (MSE). As a result of those experimental evaluations, the network model has been chosen as a feedforward NN with two hidden layer and ten sigmoid neurons for each layer. The complete scheme of the implemented NN is shown in Fig. 2, where the hand NN is represented. Furthermore, in order to improve the generalization, multiple neural networks have been trained and an average of their outputs has been considered for the experiments. Precisely, in this work fifty neural networks have been trained and their MSEs have been compared to the MSE of their average. The result of this comparison is reported in Fig. 3 for the NN of the hand and reveals a striking result, i.e. the average MSE is at least an order of magnitude less if compared with all the individual performance. Therefore, the use of multiple neural networks greatly improves the network generalization. In this way, it is possible to find the synergies coefficients corresponding to the object geometric features with higher accuracy. [] Fig. 3. Performance comparison between NN outputs and averaged output for the hand (left) and for the arm (right). 4 Demonstration of Synergy-Based Autonomous Grasping In this section, the experimental results obtained using the Multiple Neural Network method are reported. A Matlab ROS node, to connect the MNNs to a control ROS node, has been implemented using ROS Toolbox. The control node communicates with the SCHUNK S5FH control node and to the KUKA LWR 4+ Arm control node using a specific topic on which the synergies coefficients are published. Therefore, the outputs of the MNNs are used as a reference for the hand control node and for the KUKA LWR position control. The hand control allows the S5FH to reach the final grasp, starting from an open-hand configuration. The control of the hand is constituted by a kinematic algorithm that simply moves the hand in the synergies subspace toward the target. The control strategy uses the motor current measurements and introduces thresholds to avoid finger configurations that can cause high contact forces on the object. Once the feature and the pose of the object are known, the MNNs provides the control commands in terms of synergies coefficient of the desired hand configuration and hand palm pose. The arm control is a first-order kinematic control law based on the right pseudo-inverse of the geometric Jacobian matrix, since the unit quaternion has been used as end-effector orientation representation. The planned path is a linear segment in the operational space which connects the initial end-effector position [$${{\varvec{p}}}\_i$$] to the final position [$${{\varvec{p}}}\_f$$] learned with the MNN, whose parametric representation is the following: [$$\begin{aligned} {{\varvec{p}}}(s) = {{\varvec{p}}}\_i + \frac{s}{||{{\varvec{p}}}\_f - {{\varvec{p}}}\_i||}\left( {{\varvec{p}}}\_f - {{\varvec{p}}}\_i\right) , \end{aligned}$$] (9) where the time law [$$s\left( t \right) $$] is given by: [$$\begin{aligned} s \left( t \right) = a\_5 t^5 + a\_4 t^4 + a\_3 t^3 + a\_2 t^2 + a\_1 t + a\_0, \end{aligned}$$] (10) whose coefficients have been computed by imposing the conditions for [$$t = 0$$] and [$$t = t\_f$$] on the end-effector position and on its first two derivatoves. The end-effector orientation trajectory is given by: [$$\begin{aligned} {{\varvec{R}}}\_e \left( t \right) = {{\varvec{R}}}\_i {{\varvec{R}}}^i \left( t \right) , \end{aligned}$$] (11) where [$${{\varvec{R}}}\_i$$] is the initial end-effector orientation and [$${{\varvec{R}}}^i \left( t \right) $$] is the rotation matrix that describes the transition from [$${{\varvec{R}}}\_i$$] to [$${{\varvec{R}}}\_f$$]. The latter is the rotation matrix computed using the output of the MNN, i.e. the [$$\alpha \_{\epsilon \_i}$$] synergy coefficient related to the palm orientation, as well as (7) and (9). The hand palm trajectory for the orientation is expressed in terms of angle axis representation: [$$\begin{aligned} {{\varvec{R}}}^i \left( t \right) = {{\varvec{R}}}^i \left( \theta \left( t \right) , {{\varvec{r}}}^i \right) , \end{aligned}$$] (12) [] Fig. 4. Tripodal precision grasp. [] Fig. 5. Cylindrical power grasp. [] Fig. 6. Other examples of performed grasps. where [$${{\varvec{r}}}^i$$] is fixed and the timing law of [$$\theta (t)$$] is given by a fifth-order polynomial as in (10). The learning method has been tested for a total amount of 10 grasps, 5 cylinders and 5 spheres, including objects that the networks have never seen before, i.e not included in the training set. Both precision and power grasp have been tested for each object. The learned hand position and orientation is used as a reference for a simple arm control strategy described above. When the palm of the hand reaches the desired position the hand starts to move. While the arm control relies only on the reference output of the learning process, without adjustment of the planned position, the hand follows a different strategy. The planned desired configuration output of the MNN is further improved using a synergy-based control strategy described in [1]. The combination of an initial learned position with a control strategy in the synergy subspace allows overcoming planning errors due to the approximation of the relationship between the object features and the system configuration introduced by the hand and the arm MNNs. Moreover, the arm planning errors affect the hand synergies that can change even for the same object. This inconvenience is overcome by integrating learning and control in the synergies subspace. Some of the results are shown in Figs. 4, 5 and 6. 5 Conclusions and Future Work The experiments demonstrate that multiple neural networks are able to approximate with a high quality level the relationship between the synergies coefficients and the geometrical object features. Thus, MNN is a useful tool to plan grasps directly in the synergies subspace, with obvious advantages both from a computational and algorithmic point of view. Indeed, on the basis of object shape and size information, the synthesized synergies coefficients produce the desired grasp distinguishing among precision and power grasps, and also the number of fingers involved. Moreover, the use of combined synergy-based control and learning strategies for the hand allows overcoming planning errors due to uncertainties introduced by the learning process. As future experiments we plan to integrate this method to an object recognition algorithm using an RGB-D Vision sensor and to improve the coordination between the hand and the arm using human-like control strategies for the arm taking into account the motion of the hand. Moreover, we intend to explore different synergies subspaces for the arm, such as synergies of the configuration space as for the hand, and comparing the results. Acknowledgments This research has been partially funded by the EU Seventh Framework Programme (FP7) within RoDyMan project 320992. References 1. Ficuciello, F., Federico, A., Lippiello, V., Siciliano, B.: Synergies evaluation of the SCHUNK S5FH for grasping control. In: 15th International Symposium on Advances in Robot Kinematics (2016) 2. Feix, T., Pawlik, R., Schmiedmayer, H., Romero, J., Kragic, D.: The generation of a comprehensive grasp taxonomy. In: Workshop on Understanding the Human Hand for Advancing Robotic Manipulation, Robotics, Science and Systems, Washington DC (2009) 3. SCHUNK S5FH: schunk svh driver. http://​wiki.​ros.​org/​schunksvhdriver 4. Palli, G., Melchiorri, C., Vassura, G., Scarcia, U., Moriello, L., Berselli, G., Cavallo, A., Maria, G.D., Natale, C., Pirozzi, S., May, C., Ficuciello, F., Siciliano, B.: The DEXMART hand: mechatronic design and experimental evaluation of synergy-based control for human-like grasping. Int. J. Robot. Res. 33, 799–824 (2014)CrossRef 5. Ficuciello, F., Palli, G., Melchiorri, C., Siciliano, B.: Postural synergies of the UB hand IV for human-like grasping. Robot. Auton. Syst. 62, 357–362 (2014)CrossRef 6. Ficuciello, F., Palli, G., Melchiorri, C., Siciliano, B.: A model-based strategy for mapping human grasps to robotic hands using synergies. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Wollongong, Australia, pp. 1737–1742 (2013) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_13 Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction Peter Kaiser¹  , Eren E. Aksoy¹, Markus Grotz¹, Dimitrios Kanoulas², Nikos G. Tsagarakis² and Tamim Asfour¹ (1) Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), Adenauerring 2, 76131 Karlsruhe, Germany (2) Department of Advanced Robotics, Istituto Italiano di Tecnologia (IIT), via Morego 30, 16163 Genova, Italy     Peter Kaiser Email: peter.kaiser@kit.edu Abstract The perception of affordances in unknown environments is an essential prerequisite for autonomous humanoid robots. In our previous work we developed a perceptual pipeline for the extraction of affordances for loco-manipulation actions based on a simplified representation of the environment starting from RGB-D camera images. The feasibility of this approach has been demonstrated in various examples in simulation as well as on real robotic platforms. The overall goal of the perceptual pipeline is to provide a robust and reliable perceptual mechanism for affordance-based action execution. In this work we evaluate the performance of the perceptual pipeline in combination with sensor systems other than RGB-D cameras, in order to utilize redundant sensor equipment of humanoid robots. This is particularly important when considering challenging scenarios where particular sensors are not applicable, e.g. due to intense sunlight or reflective surfaces. In this work we focus on stereo cameras and LIDAR laser scanners. Keywords AffordancesPerceptionLoco-manipulation 1 Introduction One of the main motivations behind the development of humanoid robots is the idea of creating a robotic system that is able to autonomously operate in unstructured, human-centered environments. Such robots require a rich perceptual basis for identifying possible ways of interaction with the environment. The theory of affordances, originally proposed by Gibson [1], provides a conceptual mechanism for explaining the human perceptual process. It states that action possibilities are proposed to an agent, for example a human or a humanoid robot, based on properties of relevant environmental objects and based on the agent’s capabilities. A chair for example affords sitting, but only to agents of sufficient height and capability. Overviews over applications of affordances in robotics can be found in [2, 3]. Many of the teams participating in the DARPA Robotics Challenge (DRC) Finals in 2015 pursued an affordance-driven approach to whole-body locomotion and manipulation. The perceptual process as well as the execution of actions were controlled by human operators via teleoperation in supervised autonomy. Examples for such pilot interfaces can be found in [4–6]. The teams participating in the DRC mostly used a combination of LIDAR sensors and stereo vision for range sensing. Promising results have also been generated solely using stereo camera systems [7]. LIDAR sensors are precise, but expensive time-of-flight laser scanners. Point clouds are obtained by aggregating line scans of the rotating sensor over time. Stereo camera systems are cheap, passive range sensors based on the identification of point correspondences in two camera images. Stereo camera systems are known to perform poorly with untextured objects. While affordances found many applications in the field of robotics, we specifically aim at the concept of whole-body affordances, i.e. affordances that refer to actions of whole-body locomotion or manipulation. This includes actions for whole-body stabilization, e.g. leaning against walls or holding handrails, or large-scale manipulation, e.g. pushing or lifting of large objects. Actions of whole-body locomotion and manipulation play an important role for the utilization of structures designed for the human body. In the next section we describe our previously proposed hierarchical formulation of affordances based on fundamental grasp affordances, which we regard as initial work towards the formulation of whole-body affordances. 2 Technical Approach The perceptual process employed in this work starts with creating a simplified representation of the captured scene. The acquired point clouds pass several pipeline steps until the scene is represented in terms of environmental primitives, i.e. planes, cylinders or spheres. In the first step we perform a part-based object segmentation method [8] which over-segments the scene in order to roughly separate groups of environmental primitives. We further employ geometric features for iteratively categorizing the resulting segments into environmental primitives. Figure 1 shows the structure of the perceptual pipeline from depth sensor information to the extraction of affordances. Figure 2 visualizes the intermediate steps of the perceptual pipeline. [] Fig. 1. The perceptual pipeline for affordance extraction [9–11]. The perceived scene is segmented into environmental primitives which form the basis for the extraction of affordances. The pipeline is implemented within the robot software framework ArmarX [12] and intertwined with its memory subsystem MemoryX. [] Fig. 2. The intermediate steps of the perceptual pipeline (see Fig. 1) for an exemplary scene containing a large box A, a stack of bricks B and a table C In [13] we proposed a hierarchical framework for the extraction of loco-manipulation affordances based on a scene represented with environmental primitives. The framework follows the idea that the majority of loco-manipulation actions break down to elementary power grasps at the lowest level. We particularly focus on platform grasps and prismatic grasps as we think that these two grasp types are predominant for the considered set of actions. However, the framework is not limited to these two elementary affordances. Affordances are represented as continuous certainty functions [$$\begin{aligned} \varTheta \_a : SE(3) \rightarrow [0,1], \end{aligned}$$] (1) which tell, how certain the perceptual system is about the existence of an affordance a for a given end-effector pose [$$\varvec{x}\in SE(3)$$].¹ Two mathematical operations are applied to form higher-level affordance certainty functions: - Affordance certainty functions can be multiplied in order to form combined affordance certainty functions. - Environmental properties can be converted into compatible certainty functions by applying sigmoid threshold functions. The procedure of affordance extraction is robot agnostic, taking elementary body-scaled parameters, e.g. end-effector dimensions, into consideration. Figure 3 shows the hierarchical process of affordance extraction based on the exemplary bimanual support affordance [$$\varTheta \_\text {Bi-Sp}(\varvec{x}\_1,\varvec{x}\_2)$$]. [] Fig. 3. Example of a bimanual affordance certainty function. The bimanual support affordance [$$\varTheta \_\text {Bi-Sp}(\varvec{x}\_1,\varvec{x}\_2)$$] consists of a bimanual platform grasp affordance [$$\varTheta \_\text {Bi-G-Pl}(\varvec{x}\_1,\varvec{x}\_2)$$] in combination with a horizontal orientation of the underlying primitive p. Horizontality is defined via a threshold applied to the orientation function [$$\text {up}(p)$$]. A bimanual platform grasp affordance consists of two unimanual platform grasp affordances, one for each end-effector pose, and a threshold applied to the distance [$$d(\varvec{x}\_1,\varvec{x}\_2)$$] between [$$\varvec{x}\_1$$] and [$$\varvec{x}\_2$$]. Figure 4 shows an exemplary sampling of the affordance certainty functions [$$\varTheta \_\text {G-Pl}$$] for platform grasping and [$$\varTheta \_\text {G-Pr}$$] for prismatic grasping, extracted from a perceived staircase. The example shows that the perceptual pipeline is able to successfully segment the perceived environment into elementary primitives and that it can subsequently compute reasonable certainty functions for various elementary affordances. It also shows that the perceptual pipeline is able to produce useful segmentations of real scenes captured with point cloud sensors. The displayed affordance certainty functions can directly be used as a basis for planning feet poses for stepping ([$$\varTheta \_\text {G-Pl}$$]) or hand locations for grasping the handrail ([$$\varTheta \_\text {G-Pr}$$]). Further results can be found in [9–11]. The system has been implemented and evaluated in experiments based on the humanoid platform ARMAR-III. One of the performed experiments demonstrates the perception of turnable objects in the context of a bimanual valve-turning task (see Fig. 5 and [13] for further details). [] Fig. 4. A visualization of affordance certainty functions for platform grasps [$$\varTheta \_\text {G-Pl}$$] (left) and prismatic grasps [$$\varTheta \_\text {G-Pr}$$] (right) extracted from a perceived staircase. The colors indicate the value of the respective certainty function ranging from red (highly uncertain) to green (very certain), while certainty values of zero were omitted in the visualization. The scene is segmented into environmental primitives, in this case planes, e.g. the ground plane (blue arrow), and cylinders, e.g. the handrail (orange arrow). [] Fig. 5. Top: The perceptual pipeline properly extracts bimanual affordances and proposes suitable end-effector poses (left) for the subsequent action execution (right) in a valve turning scenario. Bottom: A comparable experimental setup for the humanoid robot WALK-MAN. 3 Experiments The experiments carried out in [13] demonstrate the general feasibility of the proposed approach for loco-manipulation affordance extraction and the usefulness of the generated data for subsequent action execution. The perceptual pipeline has been designed and tested with RGB-D camera images, which provide a simple and cheap solution to range sensing. However, there are multiple approaches to visual perception for humanoid robots which promise to perform better in critical circumstances that real humanoid robots would have to face. Such circumstances could include outdoor scenarios with intense sunlight or malicious object materials, e.g. reflective surfaces. In the following we present initial evaluations of the perceptual pipeline with sensor systems other than RGB-D cameras. The experiments have been carried out in multiple scenarios with the perceptual system of the humanoid robot WALK-MAN [14].² 3.1 Evaluation Scenarios To evaluate the perceptual pipeline we captured a total of 129 stereo vision and 66 LIDAR point clouds. The point clouds resemble static snapshots of two evaluation scenarios [$$S\_1$$] and [$$S\_2$$] (see Fig. 6). For each scenario [$$S\_i$$], we defined multiple experiments [$$E\_{i,1},\dots ,E\_{i,k}$$] by changing the camera perspective or by slightly modifying the experimental setup. For each experiment [$$E\_{i,j}$$] we took a series of point clouds [$$P\_{i,j,1},\dots ,P\_{i,j,n}$$]. Although the captured scene was static during the experiments, the set of point clouds captured over time resembles noise of the underlying sensor system. In Fig. 6 we briefly describe the evaluation scenarios [$$S\_1$$] and [$$S\_2$$]. [] Fig. 6. The evaluation scenarios [$$S\_1$$] (left, A vertical wooden bar in front of the robot) and [$$S\_2$$] (right, A large box, a table and a stack of bricks). The perceptual pipeline requires a set of parameters to be specified, especially for the segmentation and primitive extraction stages. These parameters potentially need to be adjusted when changing the environmental setup. However, in the following evaluation we used the same parameter setup for all experiments [$$E\_{i,j}$$] from a scenario [$$S\_i$$]. 3.2 Evaluation Procedure Each point cloud [$$P\_{i,j,l}$$] is processed using the perceptual pipeline, extracting affordance certainty functions for the elementary power grasp affordances [$$\varTheta \_\text {G-Pl}$$] and [$$\varTheta \_\text {G-Pr}$$]. For each evaluation scenario [$$S\_i$$], we first manually create a ground truth set of environmental primitives and then compute the ground truth affordance certainty functions [$$\varTheta ^\*\_\text {G-Pl}$$] and [$$\varTheta ^\*\_\text {G-Pr}$$]. We then perform a binary comparison of the ground truth affordance certainty functions with the ones extracted from the experiment point cloud, applying a threshold of 0.5 to the certainty values of [$$\varTheta $$] and [$$\varTheta ^\*$$]. The spatial and orientational tolerances [$$\varDelta x$$] and [$$\varDelta \varphi $$] for proximity of end-effector poses have been set to 7.5 cm and [$$\frac{\pi }{4}$$] rad, respectively. The tolerances can be chosen generously at this point as the process of affordance extraction in general is understood as a source of high-level information on affordances and end-effector poses, prone to a certain degree of error. Handling these perceptual inaccuracies falls into the scope of affordance validation and action execution, as described in [11]. In order to evaluate the continuous nature of the certainty functions, we additionally define a similarity measure which is defined as the ratio of similar sampling points over the total number of ground truth sampling points. Two end-effector poses [$$\varvec{x}$$] and [$$\varvec{x}^\*$$] are considered similar if [$$|\varTheta (\varvec{x}) - \varTheta ^\*(\varvec{x}^\*)|< \varepsilon $$].³ 3.3 Results Table 1 shows the evaluation results for the scenarios [$$S\_1$$] and [$$S\_2$$] with respect to the affordance certainty functions [$$\varTheta \_\text {G-Pl}$$] and [$$\varTheta \_\text {G-Pr}$$]. For each experiment [$$E\_{i,j}$$] and for both available sensors, we list the number of point clouds processed (#), as well as the [$$F\_1$$] and similarity scores, comparing with the experiment’s ground truth. The ground truth stays the same for both evaluated sensors. Figure 7 displays mean and standard deviation of the precision and recall values from Table 1 for scenario [$$S\_2$$] and the affordance certainty function [$$\varTheta \_\text {G-Pr}$$]. The results show that the perceptual pipeline can successfully process point clouds originating from the considered sensors. However, stereo vision data performs significantly worse than LIDAR data, mainly because the depth information for more distant and less textured objects is less accurate. In many cases, especially for platform grasps in scenario [$$S\_2$$], the ground truth primitives were properly extracted, but significantly shifted when using stereo vision input. Referring to Fig. 4, platform grasp affordances usually form two-dimensional manifolds in the space of end-effector positions, whereas prismatic grasp affordances form one-dimensional manifolds. This makes it harder for the perceptual pipeline to properly extract prismatic grasp affordances within the applied tolerances. This is the main reason why the [$$F\_1$$] scores are significantly worse for [$$\varTheta \_\text {G-Pr}$$] than for [$$\varTheta \_\text {G-Pl}$$], for both sensors likewise. In many cases, possibly due to outliers in the point clouds, the extracted primitives are larger than the ground truth primitives, but properly oriented and shaped. Such circumstances result in failures when comparing with the ground truth, but resulting affordances might still be of reasonable use for action planning, when employing appropriate control mechanisms. Table 2 displays the runtimes of the primitive extraction and the affordance extraction steps of the perceptual pipeline for two selected experiments both, for LIDAR and for stereo vision point clouds. The runtimes have been generated on a standard Core i7 desktop PC. Note that the perceptual pipeline has not been optimized for runtime efficiency. Table 1. Comparison of the affordance certainty functions [$$\varTheta \_\text {G-Pl}$$] and [$$\varTheta \_\text {G-Pr}$$] produced by the perceptual pipeline based on different sensors in [$$S\_1$$] and [$$S\_2$$]. +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ |   | Scenario | Exp. | LIDAR | | | Stereo vision | | | +:========================================+:===================+:==============+:======+:==========+:=====+:==============+:==========+:=====+ | | | | # | [$$F\_1$$] | Sim. | # | [$$F\_1$$] | Sim. | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | [$$\varvec{\varTheta \_\text {G-Pl}}$$] | [$$\varvec{S\_1}$$] | [$$E\_{1,1}$$] | 8 | 0.93 | 0.84 | 20 | 0.85 | 0.69 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{1,2}$$] | 2 | 0.92 | 0.84 | 5 | 0.76 | 0.56 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | [$$\varvec{S\_2}$$] | [$$E\_{2,1}$$] | 9 | 0.92 | 0.85 | 23 | 0.79 | 0.63 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,2}$$] | 9 | 0.92 | 0.84 | 17 | 0.50 | 0.35 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,3}$$] | 10 | 0.90 | 0.81 | 18 | 0.62 | 0.45 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,4}$$] | 9 | 0.90 | 0.81 | 13 | 0.56 | 0.39 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,5}$$] | 10 | 0.93 | 0.85 | 13 | 0.53 | 0.35 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,6}$$] | 9 | 0.91 | 0.82 | 20 | 0.74 | 0.58 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | [$$ \varvec{\varTheta }\_\text {G-Pr}$$] | [$$\varvec{S\_1}$$] | [$$E\_{1,1}$$] | 8 | 0.59 | 0.98 | 20 | 0.15 | 0.82 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{1,2}$$] | 2 | 0.82 | 0.98 | 5 | 0.19 | 0.78 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | [$$\varvec{S\_2}$$] | [$$E\_{2,1}$$] | 9 | 0.80 | 0.98 | 23 | 0.72 | 0.87 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,2}$$] | 9 | 0.70 | 0.97 | 17 | 0.41 | 0.67 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,3}$$] | 10 | 0.72 | 0.97 | 18 | 0.36 | 0.82 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,4}$$] | 9 | 0.69 | 0.97 | 13 | 0.43 | 0.66 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,5}$$] | 10 | 0.70 | 0.97 | 13 | 0.38 | 0.70 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ | | | [$$E\_{2,6}$$] | 9 | 0.72 | 0.97 | 20 | 0.51 | 0.75 | +-----------------------------------------+--------------------+---------------+-------+-----------+------+---------------+-----------+------+ [] Fig. 7. A comparison of average precision and recall and their standard deviations in the experiments [$$E\_{2,1},\dots ,E\_{2,6}$$] of scenario [$$S\_2$$] (see Table 1). Table 2. Average point clouds sizes (number of points) and runtimes of different steps of the perceptual pipeline. +-----------------------------------------------+-------+-------------+------------+---------------+-------------+------------+ | Experiment | LIDAR | | | Stereo vision | | | +:==============================================+:======+:============+:===========+:==============+:============+:===========+ | | Size | Prim. Extr. | Aff. Extr. | Size | Prim. Extr. | Aff. Extr. | +-----------------------------------------------+-------+-------------+------------+---------------+-------------+------------+ | [$$E\_{2,1}$$] ([$$\varTheta \_\text {G-Pl}$$]) | 117 K | 7.2 s | 67 ms | 569K | 19.3 s | 109 ms | +-----------------------------------------------+-------+-------------+------------+---------------+-------------+------------+ | [$$E\_{2,1}$$] ([$$\varTheta \_\text {G-Pr}$$]) | 117 K | 6.7 s | 50 ms | 569K | 19.0 s | 87 ms | +-----------------------------------------------+-------+-------------+------------+---------------+-------------+------------+ 4 Conclusion In our previous work, we proposed a perceptual pipeline for the extraction of affordance certainty functions from environments perceived with an RGB-D camera, which has proven to produce reasonable and useful results in multiple experiments. In this work we defined an evaluation procedure for the perceptual pipeline based on ground truth primitive sets and evaluated the performance in affordance extraction with point clouds obtained from sensor systems other than RGB-D cameras. In particular we used the sensor equipment of the humanoid robot WALK-MAN, i.e. the laser scanner and the stereo camera system of the MultiSense SL sensor head. By extending the range of sensor systems applicable with the perceptual pipeline, we aim at exploiting the full capabilities of robots with redundant sensor systems. This is a crucial capability for a perceptual system when operating in unknown environments that can happen to be particularly unfortunate for one of the implemented sensors. The results show that the perceptual pipeline can handle LIDAR and stereo vision point clouds. However, as expected, it performs significantly better with the more precise LIDAR scans. The stereo vision point clouds examined have been more dense than the LIDAR data, resulting in a much higher computation time. Although the results do not seem to justify a need for this density, it is expected to perform better in smaller-scale environments. Based on the result, we conclude that the exploitation of redundant sensor systems is possible using our previously proposed methods on affordance extraction. It would be promising to develop autonomous or semi-autonomous capabilities for detecting environmental circumstances that demand a specific sensor to be used. The extraction of affordances based on the fusion of point clouds from different sensors would also be a certain improvement. Acknowledgments The research leading to these results has received funding from the European Union Seventh Framework Programme under grant agreement no. 611832 (WALK-MAN). References 1. Gibson, J.J.: The Ecological Approach to Visual Perception (1978) 2. Şahin, E., Çakmak, M., Doǧar, M.R., Uǧur, E., Üçoluk, G.: To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adapt. Behav. 15, 447 (2007)CrossRef 3. Krüger, N., Geib, C., Piater, J., Petrick, R., Steedman, M., Wörgötter, F., Ude, A., Asfour, T., Kraft, D., Omrčen, D., Agostini, A., Dillmann, R.: Object-action complexes: grounded abstractions of sensorimotor processes. Robot. Auton. Syst. 59(10), 740–757 (2011)CrossRef 4. Romay, A., Kohlbrecher, S., Conner, D.C., von Stryk, O.: Achieving versatile manipulation tasks with unknown objects by supervised humanoid robots based on object templates. In: IEEE-RAS International Conference on Humanoid Robots, pp. 249–255 (2015) 5. Fallon, M., Kuindersma, S., Karumanchi, S., Antone, M., Schneider, T., Dai, H., Pérez, C., D’Arpino, R., Deits, M., DiCicco, D., Fourie, T., Koolen, P., Marion, M., Posa, A., Valenzuela, K.-T., Yu, J., Shah, K., Iagnemma, R., Tedrake, R., Teller, S.: An architecture for online affordance-based perception and whole-body planning. J. Field Rob. 32(2), 229–254 (2015)CrossRef 6. Hart, S., Dinh, P., Hambuchen, K.: The affordance template ROS package for robot task programming. In: IEEE International Conference on Robotics and Automation, pp. 6227–6234 (2015) 7. Fallon, M.F., Marion, P., Deits, R., Whelan, T., Antone, M., McDonald, J., Tedrake, R.: Continuous humanoid locomotion over uneven terrain using stereo fusion. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 881–888 (2015) 8. Stein, S.C., Schoeler, M., Papon, J., Wörgötter, F.: Object partitioning using local convexity. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 304–311 (2014) 9. Kaiser, P., Gonzalez-Aguirre, D., Schültje, F., Borràs, J., Vahrenkamp, N., Asfour, T.: Extracting whole-body affordances from multimodal exploration. In: IEEE-RAS International Conference on Humanoid Robots (Humanoids), pp. 1036–1043 (2014) 10. Kaiser, P., Vahrenkamp, N., Schültje, F., Borràs, J., Asfour, T.: Extraction of whole-body affordances for loco-manipulation tasks. Int. J. Human. Rob. (IJHR) 12(3), 155031 (2015) 11. Kaiser, P., Grotz, M., Aksoy, E.E., Do, M., Vahrenkamp, N., Asfour, T.: Validation of whole-body loco-manipulation affordances for pushability and liftability. In: IEEE/RAS International Conference on Humanoid Robots (Humanoids) (2015) 12. Vahrenkamp, N., Wächter, M., Kröhnert, M., Welke, K., Asfour, T.: The robot software framework armarx. Inf. Technol. 57(2), 99–111 (2015) 13. Kaiser, P., Aksoy, E.E., Grotz, M., Asfour, T.: Towards a hierarchy of loco-manipulation affordances. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2016) 14. Tsagarakis, N.G., Caldwell, D.G., Bicchi, A., Negrello, F., Garabini, M., Choi, W., Baccelliere, L., Loc, V., Noorden, J., Catalano, M., Ferrati, M., Muratore, L., Margan, A., Natale, L., Mingo, E., Dallali, H., Malzahn, J., Settimi, A., Rocchi, A., Varricchio, V., Pallottino, L., Pavan, C., Ajoudani, A., Lee, J., Kryczka, P., Kanoulas, D.: WALK-MAN: a high performance humanoid platform for realistic environments. J. Field Rob. (JFR) (2016) Footnotes 1 SE(3) denotes the special Euclidean group.   2 WALK-MAN is equipped with a MultiSense SL sensor head from Carnegie Robotics containing a LIDAR sensor and a stereo camera system. The LIDAR scanner captures 1024 points per scan and was configured to rotate with 0.5 rad/sec. The stereo camera system produces point clouds using semi-global matching based on 1 Mpx camera images. No postprocessing filters have been applied in both cases.   3 In our experiments, we chose [$$\varepsilon = 0.1$$].   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_14 Core Actuation Promotes Self-manipulability on a Direct-Drive Quadrupedal Robot Jeffrey Duperret¹  , Benjamin Kramer¹ and Daniel E. Koditschek¹ (1) Department of Electrical and Systems Engineering, University of Pennsylvania, 200 South 33rd Street, Philadelphia, Pennsylvania 19104, USA     Jeffrey Duperret Email: jdup@seas.upenn.edu Abstract For direct-drive legged robots operating in unstructured environments, workspace volume and force generation are competing, scarce resources. In this paper we demonstrate that introducing geared core actuation (i.e., proximal to rather than distal from the mass center) increases workspace volume and can provide a disproportionate amount of work-producing-force to the mass center without affecting leg linkage transparency. These effects are analytically quantifiable up to modest assumptions, and are demonstrated empirically on a spined quadruped performing a leap both on level ground and from an isolated foothold (an archetypal feature of unstructured terrain). 1 Introduction The ability of legged robots to move through unstructured environments is critical to their practical utility in first-response and search-and-rescue operations. While recent work in the field has yielded a growing array of impressive legged platforms capable of steady-state dynamic behaviors over flat or modest terrain [1–4], there has been less focus on designing systems for locomotion in highly irregular environments. In particular, there has been relatively little experimental investigation into the locomotion prowess of non-conventional legged robot morphologies (departing from the traditional rigid-body-with-appendages framework) such as core actuation that are commonly seen in animals operating in unstructured terrain. Throughout this paper, the terms core and spine actuation should be taken to mean the introduction of actuated degrees of freedom proximal to rather than distal from the mass center. Prior robotic literature on core actuation has largely focused on steady-state running gaits. Simulation studies of reduced-order models suggest that core actuation and compliance can provide increased speed, stability, and apex height while running [5–7]. Self-stabilizing gaits and decreased energetic cost of transport have been found with purely passive core compliance [8]. Experimental work involving core actuation has been more limited. The design of power-autonomous quadrupeds utilizing parallel stiffness is presented in [9, 10]. Other experiments have suggested increased running speed [11] and gait transition stability [12]. Leaping from a crouched position with a parallel elastic-actuated spine was shown to increase leap energy in [13]. However, much more work is needed to provide designers with the quantifiable utility of core actuation or lack thereof, especially for the less-studied application of traversing irregular terrain. This paper demonstrates two advantages of introducing core actuation to a direct-drive quadrupedal robot and experimentally quantifies the utility of this morphological design choice. Specifically, we demonstrate that—in inherently torque-limited, direct-drive machines [3, 14] where workspace volume and force generation are competing, scarce resources for operation in unstructured environments—the addition of core actuation increases workspace volume without adversely affecting leg force generation. Additionally, using gearing in the spine allows an individual motor to provide the mass center a significantly higher amount of sustained, work-producing force than it can on a transmission-free leg, without affecting leg linkage transparency. We quantify both of these advantages and demonstrate them on a physical platform in a series of experiments involving leaping from level ground and an isolated foothold (an archetypal feature of unstructured terrain), a task requiring both large ground forces as well as a large workspace to facilitate balancing and self-manipulation [15]. 2 Technical Approach: The Utility of Core Actuation This section describes potential workspace volume and force production advantages of adding to a conventional sagittal-plane quadrupedal machine an actuated, revolute degree-of-freedom joint proximal to the mass center. Simplified models of quadrupedal platforms, such as the one depicted in Fig. 1(a), often take the form of three-degree-of-freedom rigid bodies with common distances between the hips and mass center [16, 17] and assume massless legs able to apply wrenches on the mass center when in contact with the ground subject to friction cone constraints. Following [6–9], we add core actuation to this model by introducing an actuated revolute joint to the body, depicted in Fig. 1(b) (note that alternative formulations exist, such as [5]). Here we make the simplifying assumption that the parameters of each body segment are equal and that the mass center of each body segment is aligned with the leg hip, as is approximately true for the machine presented in Sect. 3. This model is used to analyze the experimental results given in Sect. 3.2. We further limit our discussion to the class of direct-drive quadrupedal robots whose legs are actuated by DC electric motors (as exemplified by [3] whose drive-train technology and principles of design are roughly adopted in the hips of our new, additionally spined robot to be introduced in the next section). These robots sacrifice potential motor output shaft torque for high actuator and leg linkage transparency, allowing motors to sense environmental forces and events like ground contact. Given such robots’ inherent torque limitations, it is desirable for the limb kinematics to produce high forces for given motor torques. Increasing force generation by decreasing lever arm length, however, trades away workspace size. Larger workspaces are highly beneficial in unstructured environments; they afford better access to intermittent footholds and improved body self-manipulation over a wider range of postures. A small workspace runs the risk of the robot becoming high-centered and losing balance on smaller footholds. In Appendix 1, we make explicit this trade-off confronting the designer when considering a simple scaling of a nominal leg linkage design. Core actuation allows the body to move the leg hip with respect to the mass center, thereby augmenting the leg workspace volume. Consider the simplified case of an annulus leg workspace with inner radius [$$r\_1$$] and outer radius [$$r\_2$$]. The volume of the workspace is given by [$$V= \pi (r\_2^2 - r\_1^2)$$]. Assuming core bending can translate the center of this annulus a distance [$$\bar{d}$$] with respect to the mass center and that [$$\bar{d}\ge r\_1$$], the augmented workspace provided by core actuation is [$$\bar{V}= \pi r\_2^2 + 2 r\_2\bar{d}$$], a volume increase of [$$2 r\_2\bar{d}+ \pi r\_1^2$$]. This is depicted in teal in Fig. 1. The utility of this added volume is empirically demonstrated on a spined robotic quadruped perching and leaping from an isolated foothold in Sect. 3. [] Fig. 1. (a) Simplified sagittal-plane three-degree-of-freedom model of a rigid body quadrupedal platform with massless legs. (b) Simplified sagittal-plane four-degree-of-freedom model of a spined quadrupedal platform possessing an actuated revolute joint proximal to the mass center. The models are parametrized by their mass, moment of inertia, and body segment length as shown in green. The state of the models is represented by the position and velocity of the mass center and the body segment pitch and angular velocity as shown in blue – for the spined model we choose to use the average body pitch and the difference between the pitch of the front and rear. Spine bending augments the nominal leg workspace (depicted in teal for a nominal annulus leg workspace) and provides an additional source of actuation to do useful work on the mass center. The core can be geared without affecting the direct-drive leg transparency. Geared core actuation allows otherwise direct-drive machines to augment their inherently limited ability to exert large sustained forces on the environment. Since the gearing is proximal rather than distal to the mass center, it does not diminish the leg linkages transparency that allow sensing of environmental forces. For sufficiently high-powered operation, however, core actuation requires the legs to operate in a non-transparent region of their workspace as large forces generated on the ground by the core must be transmitted through the torque-limited legs to be usefully applied to the mass center. Explicitly, if [$$F\_{\text {core}}$$] is the force generated by the core on a point contact with the ground through a static leg linkage, the leg motors must apply the torque [$$Df^{T}(q)F\_{\text {core}}$$], where Df(q) is the Jacobian of the leg’s forward kinematic chain and q are the joint positions. For sufficiently large force magnitudes this necessitates operating the legs near singularity, where a small singular value of Df(q) magnifies a component of the limited motor torque so the force generated at the core can transmit through the toe. This is a low-transparency regime of operation for the leg because external forces transferred to the motor are diminished along the direction corresponding to the small singular value of the linkage Jacobian. The robot, then, is able to operate in real-time in the continuum between two modes of operation: a low-force, high-transparency mode where the motors are capable of high-bandwidth environmental sensing, and a high-force, low-transparency mode where the geared core is able to perform significant work on the mass center. We note that – although its quantification is outside the scope of the present work – the added revolute joint will increase platform mass and complexity on a physical machine. This added mass and complexity incurred should be weighed against the aforementioned advantages when considering a spined morphology. [] Fig. 2. The robot used in the experiments has a parallel elastic-actuated spine. A version of the robot with longer legs (left) is compared with a version with shorter legs (right) in leaping from an isolated foothold for the purpose of evaluating task sensitivity to workspace size. The ratio between the lengths of the distal and proximal link (shown in Fig. 3) was chosen from numerical study to approximately maximize vertical leaping height over a range of scaling factors. The scaling factor of the distal and proximal links for the shorter legs was chosen near the smallest that allowed for balancing on the isolated foothold (specifically, to yield a minimal but non-empty intersection of the front and rear leg workspaces without core bending, allowing both legs to “grasp” the same point), and for the longer legs was chosen to be 1.5 times the shorter legs—a large enough increase to reasonably expect a significant performance difference compared to the shorter legs while keeping the extended leg length less than the hip-to-hip length as we were wary of avoiding excessive pitching when accelerating from rest caused by long legs [18]. 3 Experiments and Results This section introduces a robotic quadrupedal platform utilizing core actuation and quantifies advantages provided by the core in leaping experiments. 3.1 Experimental Setup The robot used to perform the experiments is shown in Fig. 2. It consists of a front and rear body segment connected by a parallel elastic-actuated spine. The legs are 2-degree-of-freedom 5-bar linkages driven by 2 direct-drive TMotor U8-16¹ brushless DC motors as shown in Fig. 3 and using drive electronics derived largely from those detailed in [19]. The spine is a minor variation on the design documented in [10], differing from its predecessor by using a belt drive instead of cable drive and by using different motors. The spine’s belt drive, running across the length of the spine, is actuated by a pair of coaxially-mounted U8-16 motors in parallel configuration housed in the rear body segment. A sprocket in the rear body segment connects these motors to the belt and accounts for the spine gearing. A fiberglass plate provides parallel compliance and constrains the bending motion. As this work does not focus on the energetic contribution of this parallel compliance, a thin fiberglass plate storing minimal elastic energy was used. The spine motors pull on the belt against a fixed sprocket on the front of the robot, causing upward or downward spine bending. Vertebrae affixed around the fiberglass plate act as guides for the belt (as introduced in [10]), and spring-loaded tensioners compensate for loss of tension during bending. Control is performed on-board the robot, using an STM32F3² microcontroller to command the 10 motors through custom motor controllers capable of providing up to 43 A of current per motor. Power is provided by an on-board 3 S lithium polymer battery. The only sensors aside from magnetic encoders on the motor shafts are 2 InvenSense MPU6000³ IMUs that are used to estimate the orientation of the front and rear body segments. The position and orientation of the front and rear body segments were tracked during the experiments using a 22-camera Qualisys⁴ motion capture system. This data was fit to the reduced-order, sagittal plane model depicted in Fig. 1 to ascertain the mass center trajectory and body energy. [] Fig. 3. The leg kinematics (left) are shown for 2 different sets of linkage lengths used in the experiments. The longer legs have a larger workspace (middle) while the shorter legs are able to generate higher forces for a fixed motor torque (right), as indicated by the smaller average of the squared singular values of the forward kinematic Jacobian for given motor shaft angles [$$\phi \_1, \phi \_2$$], or equivalently, thermal cost of force [14, p. 48] for a normalized motor constant. The depicted workspace and singular value results assume an end effector located where the links d3 are connected. To physically demonstrate the advantages of the spine, two leaping experiments were performed. In the first, the robot executed leaps off of a 20 cm-tall, 9 cm-wide perch as depicted in Fig. 4 to illustrate the sensitivity to workspace size when operating on isolated footholds. These leaps were performed with the longer legs shown in Fig. 3 without spine bending to illustrate task performance without workspace constraints, and with shorter legs to introduce workspace constraints. The minimum and maximum longer leg lengths are 0.141 m and 0.447 m, respectively, while the minimum and maximum shorter leg lengths are 0.087 m and 0.296 m, respectively. However, due to the complicated geometry of the workspace volume these lengths are obtainable only when the toe is vertically aligned with the hip. Spine bending was then used with the shorter legs to evaluate if the workspace benefit provided by the spine yielded a significant performance advantage. Each leaping configuration (long legs without spine bending, short legs without spine bending, and short legs with spine bending) was run 6 times using the feed-forward control strategy detailed below. In the second experiment, the robot leapt forwards from level ground using the shorter leg configuration with and without spine bending to quantify the sustained forces generated by the spine. Each experiment was conducted 6 times. In each trial, the feed-forward leaping strategy consisted of forcefully extending the front and rear legs. The magnitude of the vector of motor torques generated by each leg module was maximized with respect to the sup-norm torque-limit constraint imposed by the power electronics. The direction of this vector was chosen so that the ground reaction force vector created at the toe was approximately 45 [$$^\circ $$] from vertical. A modification to this strategy granting better performance was used on level ground, where the rear legs pushed directly backwards at 0 [$$^\circ $$] with respect to horizontal while the front extended, then switched to the nominal 45 [$$^\circ $$] ground reaction force vector, and then to a nearly vertical force vector as they neared the end of their extension⁵. When spine bending was used, the spine motors applied their maximum torque to extend the spine after a short time delay to allow the rear legs to acquire good traction and for the front to extend. If spine bending was not used, the spine motors worked to keep the spine in a straight configuration during the duration of the leap. [] Fig. 4. Leaping off a 9 cm-wide isolated foothold succeeded without core bending using longer legs (top-left, bottom blue), failed without core bending using shorter legs (top-center, bottom red), and succeeded with core bending using shorter legs (top-right, bottom green). These qualitative results—further described in Sect. 3.2—suggest that core bending provides a benefit to the robot’s kinematic workspace, allowing a successful leap using shorter legs than would be possible without core bending. 3.2 Experimental Results Perch Leaping Results. Balancing on and leaping from a 20 cm-tall, 9 cm-wide foothold was successful using the longer legs of Fig. 3 without core bending. With shorter legs and without core bending, the robot balanced on the foothold despite all four legs being near the edge of their workspace, but attempts at leaping failed. Specifically, the front legs were unable to push backwards during the leap, and any forward motion of the body moved the foothold out of the front legs’ workspace. The result was that the robot cantilevered on the back legs and pitched downwards, causing the front body segment to impact the ground. On the other hand, with shorter legs and with core bending the robot successfully leaped, aided by the increased workspace volume provided by the spine bending. The mass center trajectories during the leaps are plotted in Fig. 4. The robot achieved an average horizontal leap distance of 0.80 m using the long legs without the spine and 0.59 m using the short legs with the spine. We attribute this difference to several contributing factors. First, the longer legs provide a larger kinematic extension than the shorter legs, which directly increases the distance they push the mass center. Second, the analysis in Sect. 3.3 indicates that the spine successfully augments the workspace but the longer legs still provide a greater contribution to accomplish the workspace-sensitive task. Finally, we still do not fully understand how to apply the entire energetic contribution of the spine to the mass center using hand-tuned leaps from level ground as discussed in Sect. 3.3—a difficulty that is only compounded when leaping from the perch. [] Fig. 5. Leaping from the ground with and without spine bending using an otherwise identical feed-forward control scheme shows that the spine motors add on average 5.7 J to the body energy [13] (discounting the 0.5 J stored in initial spine elastic potential energy). The body energy added is calculated by subtracting the energy at the leap height apex—indicated by a vertical tick in the sample energetic traces shown in the right figure—from the starting energy. These results show the spine motors add a disproportionate amount of work ([$$36\,\%$$] more) during the leap on a per-motor basis as compared with the leg motors due to their gearing. Ground Leaping Results. Leaping was successful on flat ground using the shorter leg configuration both with and without spine bending as shown by the energetic results in Fig. 5. These energetic results were calculated from the extrinsic body energy of the robot, the sum of the mass center’s kinetic and gravitational potential energy relative to a simplified Lagrangian model, as documented in [13]. Leaping aided by spine bending added an average of [$$22.8 \pm 0.5$$] J to the body (an average of 22.3 J when discounting the elastic potential energy separately measured to be stored in the spine’s fiberglass plate bending) and leaping with an identical strategy but without bending the spine added an average of [$$16.6 \pm 0.7$$] J to the body, 6.2 J less than with spine bending. After discounting the elastic potential energy stored in the spine, we attribute the [$$34\,\%$$] increase in energy when using the spine to the spine motors, since they are the only other source of work available. 3.3 Experimental Insights into Core Actuation Utility Core Workspace Benefit. The results of the experiments in leaping from a small isolated foothold in Sect. 3.2 qualitatively indicate that the core is able to increase the legs’ workspace with respect to the mass center to accomplish a useful task. This benefit allows for the leap to be completed using shorter legs capable of generating higher forces—as indicated by the singular values of Fig. 3—than if no spine was used. Analytically quantifying the increased workspace conferred by the spine is confounded by the complex workspace geometry of the legs and their lack of rotational symmetry. We can estimate this increase, however, by making the approximation [$$d\_1 = 0$$] for the leg kinematics shown in Fig. 3 such that the linkage becomes the annulus analyzed in Sect. 2 with an inner radius [$$r\_1 = d\_3 - d\_2$$] and outer radius [$$r\_2 = d\_3 + d\_2$$]. Under this approximation, the longer leg linkage represents a scaling of the shorter leg linkage by a scaling factor of 1.5. The spine can move one hip a distance [$$\bar{d} = 10\,cm$$] with respect to the center of mass, satisfying [$$\bar{d} \ge r\_1$$] for the shorter legs. Thus, using the results of Sect. 2, the volume for the shorter-legs-without-spine configuration is [$$0.25m^2$$], for the shorter-legs-plus-spine configuration is [$$0.34m^2$$], and for the longer legs is [$$0.57m^2$$]. The perching experiments show that—while the volumetric benefit provided by the spine is not greater than that provided by the longer legs—this approximately [$$36\,\%$$] increase in workspace volume provided by the core allows successful self-manipulation on the perch. Geared Core Work Production. The [$$34\,\%$$] increase in body energy provided by the spine motors during the ground leaping experiments show that the spine motors add a disproportionate amount of work during the leap on a per-motor basis as compared with the leg motors. By commanding the spine motors to do useful work during the leap, the number of work-producing motors increased by [$$25\,\%$$] from 8 to 10. If the spine motors had the same energetic effect as an average leg motor, then one could reasonably assume a [$$25\,\%$$] increase in leaping body energy by using the spine.⁶ Instead, by increasing the body energy by [$$34\,\%$$], each spine motor did [$$36\,\%$$] more work on the mass center than the average leg motor did. This is made possible by the spine gearing which allows the spine motors to rotate through a much greater angular displacement than the leg motors ([$$2.6 \pi $$] radians in the spine versus an average less than [$$\pi $$] radians in the legs) while maintaining a similar torque.⁷ Under ideal conditions, the spine could likely perform much better. Theoretically, if the leg motors were used to their full potential at their low-speed torque-limited regime of operation they would each do [$$\pi \tau $$] Joules of work in a leap or stride, assuming operation at a torque limit of [$$\tau $$] over an angular displacement of [$$\pi $$] radians. With 8 leg motors used on a quadrupedal machine this gives [$$8 \pi \tau $$] Joules of available work. Adding 2 spine motors capable of a conservative angular displacement of [$$2.5 \pi $$] radians in the same low-speed torque-limited regime of operation would then increase the total maximum available work in a leap⁸ to [$$13 \pi \tau $$], a [$$62.5\,\%$$] increase in body energy in which each spine motor does 2.5 times more work on the mass center than a leg motor. Our spine experiments saw only slightly more than half of this theoretical increase in body energy, indicating that further efforts toward improving the leaping controllers will be required in order to fully exploit the potential energetic benefits of core actuation. 4 Conclusions and Future Work In direct drive machines, the addition of core actuation increases workspace volume and—with gearing—can allow the spine motors to do a disproportionately large amount of work on the mass center as compared with leg motors without negatively affecting the leg linkage transparency. These effects are analytically quantifiable up to modest assumptions and approximations, and were demonstrated empirically on a spined quadruped performing a leap both on level ground and from an isolated foothold. These results indicate that core actuation can provide designers with specific advantages (if worth the increased mass and complexity) for inherently torque-limited, direct-drive machines where workspace volume and force generation are competing scarce resources for operation in unstructured environments. Improved balance and leap performance on isolated footholds is just one example of many possible uses of core actuation in unstructured terrain. Future work now in progress seeks an experimentally-validated, reduced-order model of quadrupedal core actuation applicable to both steady-state and transitional tasks that we hope will be a first step towards quantifying and promoting new sharper hypotheses concerning the potential utility of core actuation in robotic legged locomotion. Acknowledgments This work is supported by the National Science Foundation under both the Graduate Research Fellowship Grant No. DGE-0822 and CDI-II CABiR (CDI 1028237), as well as by the Army Research Laboratory under Cooperative Agreement Number W911NF-10-2-0016. Appendix 1: Analytic Leg Force Generation Versus Workspace Volume Trade-off via Linkage Scaling We explicitly show the trade-off between leg force generation and workspace volume confronting the designer by considering a simple scaling of a nominal leg linkage design by a scaling factor [$$\lambda $$], assuming a fully actuated leg interacting with the ground through a point contact. Let the forward kinematic map of the nominal leg linkage with a point toe and origin at the hip be given by [$$f: Q\rightarrow \mathbb {R}^n$$], where [$$q\in Q$$] denotes the actuated joint positions. Consider a uniform scaling transformation applied to this linkage, scaling the length of all vectors by a factor of [$$\lambda \in \mathbb {R}^{+}$$], and let [$$f\_{\lambda }(q) :=\lambda f(q)$$] denote the forward kinematic map of the scaled linkage. The nominal leg linkage has a workspace volume given by [$$V:=\int \_{f(Q)} \varOmega $$], where [$$\varOmega $$] indicates the standard volume form on [$$\mathbb {R}^n$$] [20]. The forces [$$F$$] generated at the toe from motor torques [$$\tau $$] is then given by [$$F(q) :=Df^{-T}(q) \tau $$] assuming the leg linkage is not at singularity, where [$$Df:=\frac{\partial f}{\partial q}$$]. Denoting the workspace volume of the scaled linkage by [$$V\_{\lambda }:=\int \_{f\_{\lambda }(Q)} \varOmega $$] and the forces generated at the toe by [$$F\_{\lambda }(q) :=Df\_{\lambda }^{-T}(q) \tau $$], we have that [$$\begin{aligned} V\_{\lambda }&= \int \_{\lambda f(Q)} \varOmega = \int ... \int \_{\lambda f(Q)} {dx\_1} ... {dx}\_n = \int ... \int \_{f(Q)} {\lambda dy\_1} ... {\lambda dy}\_n \\&= \lambda ^n \int ... \int \_{f(Q)} {dy\_1} ... {dy}\_n = \lambda ^n V\end{aligned}$$] and [$$ F\_{\lambda }(q) = (\lambda Df(q))^{-T} \tau = \frac{1}{\lambda } Df^{-T}(q) \tau = \frac{1}{\lambda } F(q), $$] so that increasing scale has the dual effect of decreasing end effector force magnitude for a given motor torque vector while increasing workspace volume.⁹ References 1. Seok, S., Wang, A., Chuah, M.Y., Hyun, D.J., Lee, J., Otten, D.M., Lang, J.H., Kim, S.: Design principles for energy-efficient legged locomotion and implementation on the mit cheetah robot. IEEE/ASME Trans. Mechatron. 20(3), 1117–1129 (2015)CrossRef 2. Hereid, A., Van Why, J., Kolathaya, S., Hurst, J.W., Jones, M.S., Ames, A.D.: Dynamic multi-domain bipedal walking with atrias through slip based human-inspired control. In: Proceedings of the 17th International Conference on Hybrid Systems: Computation and Control (Part of CPS Week), HSCC 2014, pp. 263–272 (2014) 3. Kenneally, G., De, A., Koditschek, D.E.: Design principles for a family of direct-drive legged robots. IEEE Rob. Autom. Lett. 1(2), 900–907 (2016). http://​ieeexplore.​ieee.​org/​stamp/​stamp.​jsp?​arnumber=​7403902 CrossRef 4. Boston dynamics. http://​www.​bostondynamics.​com 5. Zhao, Q., Sumioka, H., Nakajima, K., Yu, X., Pfeifer, R.: Spine as an engine: effect of spine morphology on spine-driven quadruped locomotion. Adv. Rob. 28(6), 367–378 (2014)CrossRef 6. Pouya, S., Khodabakhsh, M., Sprwitz, A., Ijspeert, A.: Spinal joint compliance and actuation in a simulated bounding quadruped robot. Auton. Rob., 1–16 (article in press, 2016) 7. Culha, U., Saranli, U.: Quadrupedal bounding with an actuated spinal joint. In: Proceedings - IEEE International Conference on Robotics and Automation, pp. 1392–1397 (2011) 8. Cao, Q., Poulakakis, I.: Quadrupedal bounding with a segmented flexible torso: passive stability and feedback control. Bioinspirat. Biomimetics 8(4) (2013) 9. Folkertsma, G.A., Kim, S., Stramigioli, S.: Parallel stiffness in a bounding quadruped with flexible spine. In: IEEE International Conference on Intelligent Robots and Systems, pp. 2210–2215 (2012) 10. Pusey, J.L., Duperret, J.M., Haynes, G.C., Knopf, R., Koditschek, D.E.: Free-standing leaping experiments with a power-autonomous elastic-spined quadruped. In: SPIE Defense, Security, and Sensing, vol. 8741, p. 87 410W. International Society for Optics and Photonics (2013) 11. Khoramshahi, M., Sprowitz, A., Tuleu, A., Ahmadabadi, M.N., Ijspeert, A.J.: Benefits of an active spine supported bounding locomotion with a small compliant quadruped robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 3329–3334 (2013) 12. Tsujita, K., Kobayashi, T., Inoura, T., Masuda, T.: Gait transition by tuning muscle tones using pneumatic actuators in quadruped locomotion. In: 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008, pp. 2453–2458 (2008) 13. Duperret, J.M., Kenneally, G.D., Pusey, J.L., Koditschek, D.E.: Towards a comparative measure of legged agility. In: International Symposium on Experimental Robotics, Marrakech/Essaouira, Morocco, June 2016 14. Asada, H., Youcef-Toumi, K.: Direct-Drive Robots: Theory and Practice. MIT Press, Cambridge (1987)MATH 15. Johnson, A.M., Koditschek, D.E.: Legged self-manipulation. IEEE Access 1, 310–334 (2013)CrossRef 16. Park, H.-W., Wensing, P.M., Kim, S., et al.: Online planning for autonomous running jumps over obstacles in high-speed quadrupeds. In: Proceedings of the Robotics: Science and System (RSS), 20-22 June 2015 (to appear) 17. Poulakakis, I., Smith, J.A., Buehler, M.: Experimentally validated bounding models for the scout ii quadrupedal robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2004, pp. 2595–2600 (2004) 18. Williams, S.B., Tan, H., Usherwood, J.R., Wilson, A.M.: Pitch then power: limitations to acceleration in quadrupeds. Biol. Lett. 5(5), 610–613 (2009)CrossRef 19. De, A., Koditschek, D.E.: The Penn Jerboa: A platform for exploring parallel composition of templates. Technical report, arXiv:​1502.​05347, http://​repository.​upenn.​edu/​ese\_​reports/​16, February 2015 20. Murray, R.M., Li, Z., Sastry, S.S., Sastry, S.S.: A Mathematical Introduction to Robotic Manipulation. CRC Press, Boca Raton (1994)MATH Footnotes 1 http://​www.​rctigermotor.​com/​.   2 http://​www.​st.​com/​content/​st\_​com/​en/​products/​microcontrollers​.​html.   3 https://​store.​invensense.​com/​.   4 http://​www.​qualisys.​com/​.   5 This imparted a pitching moment on the body that improved the landing.   6 This assumes that all the leg motors operate at near constant torque, which is often a reasonable assumption for direct-drive legged-robot motors given their typical low-speed, torque-limited regime of operation. In these experiments, the motor torque is limited by the power electronics’s 43 A maximum current output, so a U8-16 motor being driven at 12 V hits the speed-torque curve and becomes power-limited when rotating faster than 42 rad/sec. The maximum angular velocity observed on the leg motors was less than 30 rad/sec, so the leg motors never leave their low-speed torque-limited regime of operation.   7 Unlike the legs, the spine motors see speeds as high as 62 rad/sec and thus transition from being torque-limited by the power electronics to being limited by the speed-torque curve. At such high speeds, the maximum torque output is [$$76\,\%$$] of the maximum leg torque output. Increasing the voltage driving the motors would diminish this torque loss.   8 This benefit is doubled when accounting for the fact that the spine can both extend on liftoff and retract on landing to perform useful work over the course of a leap or stride, unlike a leg motor.   9 An established metric for evaluating the ability of a direct-drive limb to generate forces is thermal cost of force (for a normalized motor constant) given by the mean of the squared singular values of the forward kinematic Jacobian [14, page 48], [3]. As shown in the analysis above, in general smaller singular values are achievable by decreasing the length of lever arms in the (possibly parallel) kinematic chain to gain a greater mechanical advantage.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_15 Experiments with Hierarchical Reinforcement Learning of Multiple Grasping Policies Takayuki Osa¹  , Jan Peters¹   and Gerhard Neumann¹   (1) Technische Universität Darmstadt, Hochschulstr. 10, 64289 Darmstadt, Germany     Takayuki Osa (Corresponding author) Email: osa@ias.tu-darmstadt.de   Jan Peters Email: peters@ias.tu-darmstadt.de   Gerhard Neumann Email: neumann@ias.tu-darmstadt.de Abstract Robotic grasping has attracted considerable interest, but it still remains a challenging task. The data-driven approach is a promising solution to the robotic grasping problem; this approach leverages a grasp dataset and generalizes grasps for various objects. However, these methods often depend on the quality of the given datasets, which are not trivial to obtain with sufficient quality. Although reinforcement learning approaches have been recently used to achieve autonomous collection of grasp datasets, the existing algorithms are often limited to specific grasp types. In this paper, we present a framework for hierarchical reinforcement learning of grasping policies. In our framework, the lower-level hierarchy learns multiple grasp types, and the upper-level hierarchy learns a policy to select from the learned grasp types according to a point cloud of a new object. Through experiments, we validate that our approach learns grasping by constructing the grasp dataset autonomously. The experimental results show that our approach learns multiple grasping policies and generalizes the learned grasps by using local point cloud information. Keywords GraspingReinforcement learningPoint clouds 1 Introduction Grasping is a crucial aspect in robot manipulation, and many approaches to achieve robust and adaptive grasping have been proposed in literature [1, 2]. Despite these efforts, robotic grasping has not achieved human-level performance. The data-driven approach yields a promising class of grasping methods [2, 3]. These methods leverage grasp datasets and transfer grasping motions to new target objects based on geometric information. Recent data-driven methods generalize grasps to various objects based on point clouds or RGB-D image data [4–8]. However, the performance of these data-driven methods is highly dependent on the quality of the grasp dataset, which is not easy to obtain with sufficient quality. The manual collection of such a grasp dataset can be avoided if a robotic system autonomously collects the dataset by trial and error. One solution to this problem is to use reinforcement learning [9]. A few methods for autonomous data collection of grasping have been proposed recently [10, 11]. However, these methods are based on convolutaional neural network (CNN) and limited to 2D image inputs lacking depth information. Consequently, these methods are limited to simple grasping motions, e.g., vertical pinch grasps. However, studies in the area of human grasping indicate that multiple grasp types are necessary in order to achieve dexterous and human-like manipulation [12, 13]. In this paper, we present a hierarchical reinforcement learning approach for learning to plan grasping motions based on point clouds. In our approach, the lower-level hierarchy learns multiple grasp types, and the upper-level hierarchy learns a policy to select from the learned grasp types according to the point cloud of the given object. We empirically validate that our approach learns grasping by constructing the grasp dataset autonomously. The experimental results demonstrate that the grasping performance of our approach improves iteratively by updating the grasping policy and the grasp dataset. In addition, from our experiments, we verify that the learned grasping policies can be generalized for various objects by leveraging local features of the given point cloud of the object. 2 Related Work Data-driven methods have been very popular in the field of robotic grasping. Recent studies demonstrated that grasp planning based on point clouds or RGB-D images can be generalized to various objects without solid 3D models [4–8]. However, the performance of these methods depends significantly on the quality of the training dataset of grasping motions and objects. In addition, these methods using RGB-D data are often limited to simple two-finger grippers and specific approach directions. For example, a method in [4] computes Height Accumulated Features (HAF) and detects the grasp locations. This method performs well even in scenarios with multiple objects. However, it requires a training dataset with thousands of grasps. Reinforcement learning is a promising approach for autonomous data generation. The study by Kroemer et al. showed that grasping can be learned and improved autonomously using such a reinforcement learning approach [14]. However, the authors did not completely address the problem of generalizing grasps for new scenes. Recent studies have investigated methods for autonomous large-scale data collection for grasp learning [10, 11]. The method presented in [10] showed the feasibility of autonomously collecting a dataset with thousands of grasps and training a CNN to predict grasp locations. Levine et al. proposed the learning of hand-eye coordination for grasping by using CNN [11]. However, the methods in [10, 11] are limited to a specific grasp type and 2D image input lacking depth information, although use of depth information and learning multiple grasp types are essential to achieve dexterous manipulations. In contrast with previous studies, our approach has three important features: (1) learning multiple grasp types, (2) autonomously constructing the grasp dataset through trial and error, and (3) planning the grasping motion based on point clouds. To the best of our knowledge, previous studies have not proposed a learning method that has includes all these features. 3 Learning Multiple Grasping Policies In order to make our problem tractable, we divide the problem of learning to grasp into four steps: (1) find the potential grasp locations, (2) select the grasp type and location, (3) perform the selected grasp, and (4) update the grasping policy. Our framework is summarized in Fig. 1. The system learns a policy consisting of two layers: the upper-level policy [$$\pi \_{u}$$] selects the appropriate grasp types and grasp locations, and the lower-level policy [$$\pi \_{l}$$] maps the desired grasp location to the grasping motion with a specific grasp type. [] Fig. 1. Overview of the algorithm. First, the grasping policy is initialized, and the dataset of contact information is created based on human demonstration. The individual lower-level policy [$$\pi \_{l}$$] is learned for each grasp type. The grasp quality R is approximated with Gaussian Processes (GPs). GPs are used to evaluate each combination of grasp type and location. When the point cloud of a new object is given, potential grasp locations are estimated using the grasp dataset. Subsequently, the upper-level policy [$$\pi \_{u}$$] selects the grasp type and location. After every grasp execution, the grasping policy and the dataset are updated. The grasping policy is initialized using human demonstrations. For each demonstrated grasp, the system stores the contact information of the successful grasp. Simultaneously, the grasping policy, which consists of [$$\pi \_{u}$$] and [$$\pi \_{l}$$], is initialized based on the demonstrated motions. When a new point cloud of the object is given, the system finds multiple local parts similar to the contact parts in the dataset of the successful grasps. Each grasp candidate is provided to the upper-level policy [$$\pi \_{u}$$], which selects one candidate for execution. Grasping motions for different grasp types are learned as independent policies [$$\pi ^{k}\_{l}$$] by the contextual relative entropy policy search (REPS) algorithm [15–17]. The learned policy [$$\pi ^{k}\_{l}$$] generates the motion parameter [$$\varvec{\theta }$$] using the local features of the estimated grasping part [$$\varvec{s}$$]. The grasp quality R is approximated using Gaussian Processes (GPs) as a function of motion parameter [$$\varvec{\theta }$$] and the feature of the potential grasping part [$$\varvec{s}$$]. Based on the evaluation with the learned GP models, the upper-level policy [$$\pi \_{u}$$] selects the appropriate set of the grasp type and location. We use the upper confidence bound (UCB) objective, which is a well-known acquisition function from Bayesian Optimization (BO) [18–20]. After every grasp execution, the GPs are updated, and the estimation of the grasp quality improves iteratively. Simultaneously, the executed lower-level policy [$$\pi \_{l}$$] is updated using REPS. When the grasping is successful, the contact information is stored in the dataset for the corresponding grasp. Consequently, the dataset containing the contact information of the grasp is constructed autonomously. 3.1 Learning to Select the Grasp Type and the Grasp Location In order to select the grasp type and the grasp location from the given candidates, we need to evaluate the expected grasp quality [$${{\mathrm{\mathbb {E} }}}[R | \pi ^{k}\_{l} , \varvec{s}]$$]. We approximate the grasp quality of the kth grasp type with a GP as a function of the movement parameters [$$\varvec{\theta }$$] and the grasp location features [$$\varvec{s}$$], i.e., [$$\begin{aligned} R^{k}( \varvec{\theta }, \varvec{s} ) \sim \mathcal {GP} \left( m \, \left( \varvec{z} \right) , g \left( \varvec{z}, \varvec{z}' \right) \right) \end{aligned}$$] (1) where [$$\varvec{z} = [\varvec{\theta }, \varvec{s}]^{T}$$]. We use a squared exponential covariance function [$$\begin{aligned} g \left( \varvec{z}\_{i}, \varvec{z}\_{j} \right) = \sigma ^{2}\_{f} \exp \left( - \frac{||\varvec{z}\_{i} - \varvec{z}\_{j} ||^{2} }{ 2l^{2} } \right) + \sigma \_n^2 \delta \_{\varvec{z}\_{i}\varvec{z}\_{j}}, \end{aligned}$$] (2) where l is the bandwidth of the kernel, [$$\sigma ^2\_f$$] is the function variance and [$$\sigma ^2\_n$$] is the noise variance. The hyperparameters of GP models [$$[\sigma ^2\_f, l, \sigma ^2\_n]$$] are updated after every rollout by maximizing the marginal log likelihood [21]. We assume zero prior mean, i.e., [$$m(z) = 0$$]; therefore, joint distribution of the quality measure [$$R\_{1:N}$$] of the training set and the quality measure of a query data point [$$R^\*$$] is Gaussian, i.e., [$$\begin{aligned} \left[ \begin{array}{c} \varvec{R}^{k}\_{1:N} \\ R^{\*} \end{array} \right] \sim \mathcal {N} \left( 0, \left[ \begin{array}{cc} \varvec{G}\_{k} &{} \varvec{g}\_{k} \\ \varvec{g}^{T}\_{k} &{} g(\varvec{z}^{\*}, \varvec{z}^{\*}) \end{array} \right] \right) \end{aligned}$$] (3) where [$$\varvec{G}$$] is the Gram matrix and [$$\varvec{R}^{k}\_{1:N}$$] is a column vector that contains rewards of rollouts with the kth grasp type as [$$\varvec{R}^{k}\_{1:N} = [R^{k}\_{1}, \cdots , R^{k}\_{N} ]^{T}$$]. In this framework, we employ a stochastic policy [$$\pi ^{k}\_{l}( \varvec{\theta } | \varvec{s} ) \sim \mathcal {N}( \varvec{\mu }^{k}(\varvec{s}), \varvec{\varSigma }^{k}(\varvec{s}) )$$]. In order to estimate [$${{\mathrm{\mathbb {E} }}}[ R | \pi ^{k}\_{l}, \varvec{s}]$$] using GPs, we can consider that the inputs of GPs are drawn from the distribution [$$\begin{aligned} \varvec{z}^{\*} \sim \mathcal {N} \left( \varvec{\mu }\_{\varvec{z}^{\*}}, \varvec{\varSigma }\_{\varvec{z}^{\*}} \right) \text { where } \varvec{\mu }\_{ \varvec{z} } = \left[ \begin{array}{c} \varvec{\mu }^{k}(\varvec{s}) \\ \varvec{s} \end{array} \right] , \varvec{\varSigma }\_{ \varvec{z} } = \left[ \begin{array}{cc} \varvec{\varSigma }^{k}(\varvec{s}) &{} 0\\ 0 &{} 0 \end{array} \right] . \end{aligned}$$] (4) To estimate the expected reward for grasp type k when given a context [$$\varvec{s}$$], we need to compute the integral [$$\begin{aligned} p( R^{\*} | \varvec{\mu }\_{\varvec{z}^{\*}}, \varvec{\varSigma }\_{\varvec{z}^{\*}} ) = \int p( R^{\*} | \varvec{z}, D )p( \varvec{z}^{\*} ) d\varvec{z}^{\*} \end{aligned}$$] (5) where D represents the dataset of motion parameters, contexts, and resulting rewards. The studies in [22, 23] showed that the mean and the covariance of this distribution are given by [$$\begin{aligned} \mu \_{R}&= {{\mathrm{\mathbb {E} }}}[ R^{\*} | \pi ^{k}\_{l}, \varvec{s}^{k} ] = \varvec{q}^{T} \varvec{\beta } \end{aligned}$$] (6) [$$\begin{aligned} \sigma \_{R}&= g\_{k} \left( \varvec{z}^{\*}, \varvec{z}^{\*} \right) - \varvec{g}^{T}\_{k}\varvec{G}\_{k}^{-1}\varvec{g}\_{k} + \text {Tr}\left[ \varvec{G}^{-1}\_{k}(\varvec{g}\_{k}\varvec{g}^{T}\_{k} - Q) \right] + \text {Tr}\left[ \varvec{\beta } \varvec{\beta }^{T} ( Q - \varvec{q} \varvec{q}^{T} ) \right] \end{aligned}$$] (7) where [$$\varvec{\beta } = \varvec{G}^{-1}\_{k} \varvec{R}^{k}$$], and the vector [$$\varvec{q}$$] and the matrix [$$\varvec{Q}$$] are given by [$$\begin{aligned} q\_{j}&= \frac{ \exp \left( - \frac{1}{2} ( \varvec{\mu }\_{\varvec{z}^{\*}} - \varvec{z}\_{j} )^{T}( \varvec{\varLambda } + \varvec{\varSigma }\_{ \varvec{z}^{\*} })^{-1} ( \varvec{\mu }\_{\varvec{z}^{\*}} - \varvec{z}\_{j} ) \right) }{ | 2 \varvec{\varLambda } \varvec{\varSigma }\_{ \varvec{z}^{\*} } + I |^{1/2 } }, \end{aligned}$$] (8) [$$\begin{aligned} Q\_{ij}&= \frac{ \exp \left( - \frac{1}{2} \left[ ( \varvec{\mu }\_{\varvec{z}^{\*}} - \varvec{z}\_{d} )^{T}( \frac{\varvec{\varLambda }}{2} + \varvec{\varSigma }\_{ \varvec{z}^{\*} })^{-1} ( \varvec{\mu }\_{\varvec{z}^{\*}} - \varvec{z}\_{d} ) + ( \varvec{z}\_{i} - \varvec{z}\_{j} )^{T} (2\varvec{\varLambda }) ( \varvec{z}\_{i} - \varvec{z}\_{j} ) \right] \right) }{ | 2 \varvec{\varLambda } \varvec{\varSigma }\_{ \varvec{z}^{\*} } + I |^{1/2} }, \end{aligned}$$] (9) where [$$\varvec{z}\_{d} = \frac{1}{2}( \varvec{z}\_{i} + \varvec{z}\_{j} )$$], and [$$\varvec{\varLambda }$$] is a diagonal matrix with [$$\varvec{\varLambda } = l^{2} \varvec{I}$$]. [] These GP models are used to evaluate the grasp locations found for each grasp type. For grasp selection, we must consider the exploration-exploitation trade-off between gaining more information and maximizing the expected quality of the grasp. Such an exploration-exploitation trade-off is considered by many acquisition functions used in BO. We use the UCB [18] acquisition function, which has been shown to perform well in practice. The learner selects the grasp type and location by maximizing the acquisition function [$$\begin{aligned} u(\pi ^{k}\_{l}, \varvec{s}^k) = {{\mathrm{\mathbb {E} }}}[R^{\*} | \pi ^{k}\_{l}, \varvec{s}^{k}] + \beta \sigma \_{R} ( \pi ^{k}\_{l}, \varvec{s}^{k} ), \end{aligned}$$] (10) where [$$\beta $$] is a positive constant that controls the exploration-exploitation trade-off. The algorithm to select the grasp types and locations is summarized in Algorithm 1. 3.2 Finding Potential Grasp Locations [] Fig. 2. (a) and (b):Point cloud of object with contact points. Blue points represent the point cloud of the object [$$\varvec{P}$$]. Red points represent contact points. Green points represent the neighbors of the contact points [$$\varvec{C}$$]. (c) Example of the result of ICP. Blue, green, red, and yellow points represent a partial point cloud [$$\varvec{p}\_{i}$$] of a given object, the contact part [$$\varvec{C}\_{j}$$] from the dataset of successful grasps, the result of ICP algorithm [$$H^{j}\_{\text {icp}} \varvec{C}\_{j}$$], and the estimated grasp part [$$\varvec{p}\_{\text {grasp}}$$], respectively. In order to estimate the potential grasp location from a given point cloud, the system searches for local parts that are similar to the point clouds of the contact parts in the library of the successful grasps [$$\mathcal {D}\_{\text {contact}}$$]. In this process, we use the Iterative Closest Points (ICP) algorithm [24], which finds a homogeneous transformation [$$H\_{\text {icp}}$$] that minimizes the distance between two point clouds. When the point cloud of the target object [$$\varvec{P}\_{\text {target}}$$] is given, the system randomly samples a subset of the point cloud of the new object [$$\varvec{p}\_i \subset \varvec{P}\_{\text {target}}$$]. Subsequently, ICP is performed between [$$\varvec{p}\_i$$] and each contact parts in our dataset [$$\varvec{C}\_j \in \mathcal {D}\_{\text {contact}}$$]. ICP returns the residual distance [$$d\_{\text {icp}}$$] between [$$\varvec{p}\_i$$] and [$$\varvec{C}\_j $$]; therefore, we can determine the successful grasp that is the most similar to the sampled part [$$\varvec{p}\_i$$] from the dataset. Using the result of ICP with the smallest residual distance [$$d^{\*}\_{\text {icp}}$$], we can find the point cloud part that is similar to the grasp part in the dataset. The potential grasp part [$$\varvec{p}\_{\text {grasp}}$$] can be estimated as a point cloud in the neighborhood of [$$H^{j^\*}\_{\text {icp}} \varvec{C}\_{j^\*}$$] from [$$\varvec{P}\_{\text {target}}$$]. Figure 2 shows examples of the contact parts in the dataset and the behavior of ICP. By repeating this local search for different subsets of [$$\varvec{P}\_{\text {target}}$$], we can find multiple potential grasp locations. The process to obtain the grasp locations is summarized in the Algorithm 2. Separate datasets of successful grasps are maintained for different grasp types, and this process is performed for each grasp type. This method does not require the entire point cloud of the target object because it searches for local features of the point cloud. This feature is useful in the planning of grasps in real systems in which complete point clouds of objects are not available. [] In order to obtain a concise description of the local point cloud [$$\varvec{p}\_{\text {grasp}}$$] at the estimated grasp location, we compute the center of the contact points and the normal vector at the center of the contact points for each finger as [$$\begin{aligned} \varvec{s} = [\varvec{x}^{1}\_{\text {center}}, \varvec{n}^{1}\_{\text {center}}, \ldots , \varvec{x}^{f}\_{\text {center}}, \varvec{n}^{f}\_{\text {center}}], \end{aligned}$$] (11) where [$$\varvec{x}^{i}\_{\text {center}}$$] is the center of the contact part of the ith finger, [$$\varvec{n}^{i}\_{\text {center}}$$] is the normal vector at the center of the contact part of the ith finger, and f is the number of fingers of the hand. This local description of contact points [$$\varvec{s}$$] is used as a context in the contextual policy search of the lower-level policies [$$\pi ^{k}\_{l}$$]. 3.3 Learning the Policy for the Desired Grasp Type We use the contextual REPS algorithm [15, 17] to learn the lower-level policies [$$\pi ^{k}\_{l}(\varvec{\theta }| \varvec{s})$$] that estimate the parameters [$$\varvec{\theta }$$] of the grasping motions with the given contexts for each grasp type. In policy search, the policy must be updated in order to maximize the expected reward. For stable exploration, the “difference” between the old and new policies is bounded in the policy update of REPS. Therefore, the resulting policy will remain close to the initial policy even if the reward function is multi-modal. In our framework, each lower-level policy is initialized by human demonstrations, and REPS finds a locally optimal policy that is associated with the grasp type indicated by human demonstrations. REPS uses the KL divergence between the sample distribution [$$q(\varvec{s}, \varvec{\theta } )$$] and the updated distribution [$$\pi (\varvec{\theta } | \varvec{s} )\mu \_{\varvec{s}}(\varvec{s})$$] as a similarity measure in the policy update. [$$\mu \_{\varvec{s}}$$] is the distribution of the context. The policy update using contextual REPS is formulated as a constraint optimization problem, [$$\begin{aligned} \max \_{\pi }&\int \mu \_{\varvec{s}}(\varvec{s} ) \int \pi (\varvec{\theta }| \varvec{s} ) R( \varvec{\theta }, \varvec{s} )d \varvec{\theta } ds \end{aligned}$$] (12) [$$\begin{aligned} \mathrm {s.t. }&\epsilon \ge \int \mu \_{\varvec{s}}( \varvec{s} )\text {KL}\left( \pi ( \varvec{\theta } | \varvec{s} ) || q( \varvec{\theta } | \varvec{s} ) \right) d\varvec{s}, \ \ 1 = \int \pi ( \varvec{\theta } | \varvec{s}) d\varvec{s} \end{aligned}$$] (13) For details, please refer to the original study and its extensions [15, 16]. Contextual REPS models the policy as a Gaussian policy [$$\pi (\varvec{\theta }| \varvec{s}) = \mathcal {N}(\varvec{\phi }(\varvec{s})^T \varvec{w}, \varvec{\varSigma }\_{\varvec{\theta }})$$] with a mean vector [$$\varvec{\mu }\_{\varvec{\theta }} = \varvec{\phi }(\varvec{s})^T \varvec{w}$$] that is linear in the context features [$$\varvec{\phi }(\varvec{s})$$]. We require a policy that is non-linear in the original context [$$\varvec{s}$$] because grasping is a complex task. Therefore, we use a squared exponential feature where we select M random samples [$$\varvec{s}$$] from our dataset, i.e., the ith dimension of [$$\varvec{\phi }$$] is given by [$$\begin{aligned} \phi \_{i} ( \varvec{s}) = \exp \left( - \frac{1}{2} \left( \varvec{s}\_{i} - \varvec{s} \right) ^{T} \varvec{\varLambda }\_{\phi } \left( \varvec{s}\_{i} - \varvec{s} \right) \right) , \end{aligned}$$] (14) where [$$\varvec{\varLambda }\_{\phi }$$] is a diagonal matrix that defines the bandwidth for each element of the context vector [$$\varvec{s}$$]. 4 Experimental Results 4.1 Simulations [] Fig. 3. Objects used to learn multiple grasp types: objects 1 and 2 were used for precision grasp, objects 3 and 4 were used for power grasp, and objects 5 and 6 were used for medium-wrap grasp. In the simulation experiments, the system learned three grasp types: precision grasp, power grasp, and medium wrap [13]. In order to initialize the grasping policy, a human operator specified the control parameters to demonstrate each type of grasping. For each grasp type, 12 demonstrations were given to the system. Then, the grasping policy was initialized, and the system learned to generalize the control parameters for given objects in different positions. During the learning phase, point clouds of objects were provided to the system, and the system autonomously chose the grasp type and location and executed the grasp using the motion parameters [$$\varvec{\theta }$$]. The grasping policy was updated after every grasp execution. We used the model of KUKA Light Weight Robot and DLR/HIT II Hand as a robotic manipulator in the simulation. In the simulation, the motion parameter [$$\varvec{\theta }$$] of the lower-level policies was given as [$$\begin{aligned} \varvec{\theta } = [ \varvec{x}\_{\text {grasp}}, \varvec{x}\_{\text {via}}, \varvec{q}\_{\text {grasp}}], \end{aligned}$$] (15) where [$$\varvec{x}\_{\text {grasp}}$$] is the grasp position of the end-effector in Cartesian space, [$$\varvec{x}\_{\text {via}}$$] is the via point of the end-effector in Cartesian space, and [$$\varvec{q}\_{\text {grasp}}$$] is a quaternion that represents the orientation of the end-effector in the grasp position. The finger configuration was initialized using the human demonstration, and was not included in the motion parameter for the learning phase. We used the contact information of the thumb and index finger of the hand as a context [$$\varvec{s}$$]. Therefore, the context vector [$$\varvec{s}$$] had 12 dimensions. The grasp quality R for each rollout is computed based on the force-closure condition and [$$L^{1}$$] grasp quality measure [25–27] as [$$\begin{aligned} R = c\_{1}Q + c\_{2}\delta \_{\text {FC}}, \end{aligned}$$] (16) where Q is the [$$L^{1}$$] grasp quality measure, and [$$\delta \_{\text {FC}} $$] is equal to 1 when the grasp is force-closure and is equal to zero otherwise. The variables [$$c\_{1}$$] and [$$c\_{2}$$] are positive constants. We compared two policy search methods in the proposed framework. Although we use REPS for the lower policies to learn multiple grasp types in this study, other policy search methods can be used in the proposed framework. We compared Reward-Weighted-Regression (RWR) algorithm with REPS [28]. RWR is a policy search method that performs well for real robot tasks, however, it does not constrain the KL divergence in the policy update. Therefore, a comparison between REPS and RWR indicates the manner in which the KL bound in the policy update influences the proposed framework. With regard to the upper-level policy, we compared [$$\epsilon $$]-greedy policy with UCB [9]. In this simulation, We set [$$\epsilon = 0.05$$]. [] Fig. 4. Performance in simulation. (a) Grasps performed in simulation. (b) Improvement in grasp success rate. (c) Improvement in grasp quality estimation. As shown in Fig. 3, we used six objects. Objects 1 and 2 were used to demonstrate precision grasps, objects 3 and 4 were used to demonstrate power grasps, and objects 5 and 6 were used to demonstrate medium-wrap grasps. In the learning phase, test objects were randomly chosen from these objects. Grasps performed in the simulation are shown in Fig. 4(a). The grasp success rate improved through trials from 67.5[$$\%$$] at the beginning to 94.1[$$\%$$] after 400 trials of grasping (Fig. 4(b)). In addition, the estimation of the grasp quality with GPs improved through trials as shown in Fig. 4(c). The comparison between REPS and RWR shows that the KL bound in the policy update enables efficient exploration in the search space. The differences between REPS+UCB and RWR+UCB and between REPS+EPS and RWR+EPS are statistically significant at the 5[$$\%$$] level. The comparison between UCB and the [$$\epsilon $$]-greedy implied that UCB can deal with the exploration-exploitation trade-off better than the [$$\epsilon $$]-greedy policy in the proposed framework, although the differences between RWR+UCB and RWR+EPS and between REPS+UCB and REPS+EPS were not statistically significant. 4.2 Experiments with a Real Robot We tested whether our learned model can be transferred to a real robotic system. The grasping policy was learned through 400 grasp executions in the simulation described in Sect. 4.1. We used 10 objects as shown in Fig. 6. For each object, grasps were tested five times by changing the object position and orientation. KUKA Light Weight Robot and DLR/HIT II Hand were used for this experiment. The arm and fingers of the robot were controlled by using impedance control. The results of the experiment are summarized in Table 1. The success rate was 90[$$\%$$], and the performed grasps are shown in Fig. 7. Figure 5 shows the steps in the ICP process to find potential grasping parts. Our approach using ICP performed well with partial point clouds obtained from real scenes using the Kinect. [] Fig. 5. Examples of the process using ICP to find local features that are similar to stored successful grasps. In the middle figures, the green dots represent the contact part in the dataset of successful grasps, and the red dots represent the result of ICP. In the right figures, the green dots represent the estimated contact part of the thumb, and the orange dots represent the estimated contact part of the index finger. The shapes of the objects are different from the shapes of the object models used in simulations. Hence, these results show that the learned policy can be used for objects with unseen shapes. In addition, the results demonstrate that the learned policy can successfully plan the grasping motion by using only the partial point clouds of objects. [] Fig. 6. Objects used in the experiment. Table 1. Grasp performance in a real robotic system. The object numbers correspond to the numbers in Fig. 7. [TABLE] [] Fig. 7. Grasps performed in the experiment. The robot chose the appropriate grasp types and successfully executed the grasps using the given point clouds. 5 Discussion In hierarchical policy search, typically, the process of learning the upper-level and lower-level policies simultaneously is not trivial because the behaviors of policies in different layers often influence each other. However, we did not observe undesired behavior in the system. In our framework, [$$R(\varvec{\theta }, \varvec{s})$$] is stationary. Therefore, the estimation using GPs improves as the system performs more rollouts and increases the number of data samples. After the upper-level policy selects the grasp type for the given context, the rest of the process is a standard policy search problem because each lower-level policy is learned independently in our framework. Thus, the behaviors of the lower-level policies are expected to be stable. Although the independence of lower-level policies simplifies the problem, in future work, transferring the policies between different grasp types may lead to a more efficient learning method. Although we used the method described in Sect. 3.2 to find potential grasp parts in point clouds, our framework is not limited to specific methods for finding grasp affordances. Therefore, the existing methods such as [7] can be also used to find grasp affordances in our framework. Experimental results shows that our framework learns multiple grasp types and a policy to select them according to the given objects. However, the grasp types and locations should be selected on the basis on additional factors, such as human preferences and the tasks planned to be performed after grasping. Our framework selects grasps based on only the grasp stability. Therefore, in future work, the selection of grasp types and locations should consider the additional factors. 6 Conclusion In this paper, we presented a framework for hierarchical reinforcement learning of grasping policies. Our approach autonomously constructs the dataset of grasping motions and point clouds of objects through trial and error. The proposed framework learns multiple grasp types and a policy to select from the learned grasp types for the given objects. In contrast with previous studies, our approach is not limited to specific grasp types and leverages local features of point clouds of objects instead of 2D images. We performed experiments with simulations and with a real robot to test the performance of our approach in learning to grasp with a five-finger hand. The experimental results indicate that our approach learns appropriate grasps by autonomously updating the grasping policy and the grasp dataset. In future work, the selection of grasp candidates based on human preferences and other factors must be investigated. References 1. Bicchi, A., Kumar, V.: Robotic grasping and contact: a review. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 348–353 (2000) 2. Bohg, J., Morales, A., Asfour, T., Kragic, D.: Data-driven grasp synthesis- a survey. IEEE Trans. Robot. 30(2), 289–309 (2014)CrossRef 3. Goldfeder, C., Allen, P.K.: Data-driven grasping. Autonomous Robots 31, 1–20 (2011)CrossRef 4. Fischinger, D., Weiss, A., Vincze, M.: Learning grasps with topographic features. Intl. J. Robot. Res. 34, 1167–1194 (2015) 5. Kopicki, M., Detry, R., Adjigble, M., Stolkin, R., Leonardis, A., Wyatt, J.L.: One-shot learning and generation of dexterous grasps for novel objects. Intl. J. Robot. Res. (2015) 6. Lenz, I., Lee, H., Saxena, A.: Deep learning for detecting robotic grasps. Intl. J. Robot. Res. 34, 705–724 (2015) 7. Ten Pas, A., Platt, R.: Localizing handle-like grasp affordances in 3d point clouds. In: International Symposium on Experimental Robotics (ISER) (2014) 8. Gualtieri, M., Ten Pas, A., Saenko, K., Platt, R.: Using geometry to detect grasp poses in 3d point clouds. In: International Symposium on Robotics Research (ISRR) (2015) 9. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. The MIT Press, Cambridge (1998) 10. Pinto, L., Gupta, A.: Supersizing self-supervision: learning to grasp from 50k tries and 700 robot hours. In: IEEE International Conference on Robotics and Automation (ICRA) (2016) 11. Levine, S., Pastor, P., Krizhevsky, A., Quillen, D.: Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. CoRR abs/1603.02199 (2016) 12. Napier, J.R.: The prehensile movements of the human hand. J. Bone Joint Surg. 38-B(4), 902–913 (1956) 13. Cutkosky, M.R., Howe, R.D.: Human grasp choice and robotic grasp analysis. In: Venkataraman, S.T., Iberall, T. (eds.) Dextrous Robot Hands, pp. 5–31. Springer, New York (1990) 14. Kroemer, O., Detry, R., Piater, J., Peters, J.: Combining active learning and reactive control for robot grasping. Robot. Autonomous Syst. 9, 1105–1116 (2010)CrossRef 15. Peters, J., Muelling, K., Altun, Y.: Relative entropy policy search. In: AAAI Conference on Artificial Intelligence (AAAI) (2010) 16. Kupcsik, A., Deisenroth, M.P., Peters, J., Loh, A.P., Vadakkepat, P., Neumann, G.: Model-based contextual policy search for data-efficient generalization of robot skills. Artificial Intell. (2014) 17. Deisenroth, M.P., Neumann, G., Peters, J.: A survey on policy search for robotics. Foundations Trends Robot. 21, 388–403 (2013) 18. Auer, P.: Using confidence bounds for exploitation-exploration trade-offs. J. Mach. Learn. Res. 3, 397–422 (2003)MathSciNetMATH 19. Srinivas, N., Krause, A., Kakade, S., Seeger, M.: Information-theoretic regret bounds for gaussian process optimization in the bandit setting. IEEE Trans. Inf. Theory 58(5), 3250–3265 (2012)MathSciNetCrossRef 20. Calandra, R., Seyfarth, A., Peters, J., Deisenroth, M.P.: Bayesian optimization for learning gaits under uncertainty. Ann. Math. Artif. Intell. 76(1), 5–23 (2016)MathSciNetCrossRef 21. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, Cambridge (2005) 22. Girard, A., Rasmussen, C.E., Candela, J.Q., Murray-Smith, R.: Gaussian process priors with uncertain inputs - application to multiple-step ahead time series forecasting. In: Advances in Neural Information Processing Systems (2002) 23. Candela, J.Q., Girard, A.: Prediction at an uncertain input for Gaussian processes and relevance vector machines - application to multiple-step ahead time-series forecasting. Technical report, Danish Technical University (2002) 24. Besl, P.J., McKay, N.D.: A method for registration of 3-d shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)CrossRef 25. Murray, R.M., Sastry, S.S., Zexiang, L.: A Mathematical Introduction to Robotic Manipulation, 1st edn. CRC Press Inc., Boca Raton (1994)MATH 26. Ferrari, C., Canny, J.: Planning optimal grasps. In: IEEE International Conference on Robotics and Automation (ICRA), vol. 3, pp. 2290–2295, May 1992 27. Pokorny, F., Kragic, D.: Classical grasp quality evaluation: new algorithms and theory. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3493–3500, November 2013 28. Peters, J., Schaal, S.: Reinforcement learning by reward-weighted regression for operational space control. In: International Conference on Machine Learning (ICML) (2007) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_16 Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection Sergey Levine¹  , Peter Pastor¹, Alex Krizhevsky¹ and Deirdre Quillen¹ (1) Google, Menlo Park, USA     Sergey Levine Email: slevine@google.com Abstract We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing. Keywords Deep learningGraspingComputer vision 1 Introduction When humans and animals engage in object manipulation behaviors, the interaction inherently involves a fast feedback loop between perception and action. Even complex manipulation tasks, such as extracting a single object from a cluttered bin, can be performed with hardly any advance planning, relying instead on feedback from touch and vision. In contrast, robotic manipulation often (though not always) relies more heavily on advance planning and analysis, with relatively simple feedback, such as trajectory following, to ensure stability during execution. Part of the reason for this is that incorporating complex sensory inputs such as vision directly into a feedback controller is exceedingly challenging. Techniques such as visual servoing [Siciliano and Khatib 2007] perform continuous feedback on visual features, but typically require the features to be specified by hand, and both open loop perception and feedback (e.g. via visual servoing) requires manual or automatic calibration to determine the precise geometric relationship between the camera and the robot’s end-effector. [] Fig. 1. Our large-scale data collection setup, consisting of 14 robotic manipulators. We collected over 800,000 grasp attempts to train the CNN grasp prediction model. We propose a learning-based approach to hand-eye coordination, which we demonstrate on a robotic grasping task. Our approach is data-driven and goal-centric: our method learns to servo a robotic gripper to poses that are likely to produce successful grasps, with end-to-end training directly from image pixels to task-space gripper motion. By continuously recomputing the most promising motor commands, our method continuously integrates sensory cues from the environment, allowing it to react to perturbations and adjust the grasp to maximize the probability of success. Furthermore, the motor commands are issued in the frame of the robot, which is not known to the model at test time. This means that the model does not require the camera to be precisely calibrated with respect to the end-effector, but instead uses visual cues to determine the spatial relationship between the gripper and graspable objects in the scene. Our method consists of two components: a grasp success predictor, which uses a deep convolutional neural network (CNN) to determine how likely a given motion is to produce a successful grasp, and a continuous servoing mechanism that uses the CNN to continuously update the robot’s motor commands. By continuously choosing the best predicted path to a successful grasp, the servoing mechanism provides the robot with fast feedback to perturbations and object motion, as well as robustness to inaccurate actuation. The grasp prediction CNN was trained using a dataset of over 800,000 grasp attempts, collected using a cluster of similar (but not identical) robotic manipulators, shown in Fig. 1, over the course of several months. Our experimental evaluation demonstrates that our convolutional neural network grasping controller achieves a high success rate when grasping in clutter on a wide range of objects, including objects that are large, small, hard, soft, deformable, and translucent. Supplemental videos of our grasping system show that the robot employs continuous feedback to constantly adjust its grasp, accounting for motion of the objects and inaccurate actuation commands. We also compare our approach to open-loop alternative designs to demonstrate the importance of continuous feedback, as well as a hand-engineering grasping baseline that uses manual hand-to-eye calibration and depth sensing. Our method achieves the highest success rates in our experiments¹. 2 Related Work Robotic grasping is one of the most widely explored areas of manipulation. A complete survey of grasping is outside the scope of this work, and we refer the reader to standard surveys on the subject for a more complete treatment [Bohg et al. 2014], while in this section we primarily discuss data-driven prior grasping methods, which are the most related to the present work. Such methods take a variety of forms, including human-supervised methods that predict grasp configurations [Herzog et al. 2014, Lenz et al. 2015] and methods that predict finger placement from geometric criteria computed offline [Goldfeder et al. 2009]. Both types of data-driven grasp selection have recently incorporated deep learning [Kappler et al. 2015, Lenz et al. 2015, Redmon and Angelova 2015]. Feedback has been incorporated into grasping primarily as a way to achieve the desired forces for force closure and other dynamic grasping criteria [Hudson et al. 2012], as well as in the form of standard servoing mechanisms, including visual servoing (described below) to servo the gripper to a pre-planned grasp pose [Kragic and Christensen 2002]. The method proposed in this work is entirely data-driven, and does not rely on any human annotation either at training or test time, in contrast to prior methods based on grasp points. Furthermore, our approach continuously adjusts the motor commands to maximize grasp success, providing continuous feedback. Comparatively little prior work has addressed direct visual feedback for grasping, most of which requires manually designed features to track the end effector [Vahrenkamp et al. 2008, Hebert et al. 2012]. Our approach is most closely related to recent work on self-supervised learning of grasp poses by [Pinto and Gupta 2016]. This prior work proposed to learn a network to predict the optimal grasp orientation for a given image patch, trained with self-supervised data collected using a heuristic grasping system based on object proposals. In contrast to this prior work, our approach achieves continuous hand-eye coordination by observing the gripper and choosing the best motor command to move the gripper toward a successful grasp, rather than making open-loop predictions. Furthermore, our approach does not require proposals or crops of image patches and, most importantly, does not require calibration between the robot and the camera, since the closed-loop servoing mechanism can compensate for offsets due to differences in camera pose by continuously adjusting the motor commands. We trained our method using over 800,000 grasp attempts on a very large variety of objects, which is more than an order of magnitude larger than prior methods based on direct self-supervision [Pinto and Gupta 2016] and more than double the dataset size of prior methods based on synthetic grasps from 3D scans [Kappler et al. 2015]. Another related area is visual servoing, which addresses moving a camera or end-effector to a desired pose using visual feedback [Kragic and Christensen 2002, Siciliano and Khatib 2007]. In contrast to our approach, visual servoing methods are typically concerned with reaching a pose relative to objects in the scene, and often (though not always) rely on manually designed or specified features for feedback control. To the best of our knowledge, no prior learning-based method has been proposed that uses visual servoing to directly move into a pose that maximizes the probability of success on a given task (such as grasping). 3 Overview Our approach to learning hand-eye coordination for grasping consists of two parts. The first part is a prediction network [$$g(\varvec{I}\_t, \varvec{v}\_t)$$] that accepts visual input [$$\varvec{I}\_t$$] and a task-space motion command [$$\varvec{v}\_t$$], and outputs the predicted probability that executing the command [$$\varvec{v}\_t$$] will produce a successful grasp. The second part is a servoing function [$$f(\varvec{I}\_t)$$] that uses the prediction network to continuously control the robot to servo the gripper to a success grasp. By breaking up the hand-eye coordination system into components, we can train the CNN grasp predictor using a standard supervised learning objective, and design the servoing mechanism to utilize this predictor to optimize grasp performance. In order to train our prediction network, we collected over 800,000 grasp attempts using a set of similar (but not identical) robotic manipulators, shown in Fig. 1. To ensure generalization of the learned prediction network, the specific parameters of each robot varied in terms of the camera pose relative to the robot, providing independence to camera calibration. Furthermore, uneven wear and tear on each robot resulted in differences in the shape of the gripper fingers. Although accurately predicting optimal motion vectors in open-loop is not possible with this degree of variation, as demonstrated in our experiments, our continuous servoing method can correct mistakes by observing the outcomes of its past actions, achieving a high success rate even without knowledge of the precise camera calibration. 4 Grasping with CNNs and Continuous Servoing In this section, we discuss each component of our approach, including a description of the neural network architecture and the servoing mechanism. 4.1 Grasp Success Prediction with Convolutional Neural Networks The grasp prediction network [$$g(\varvec{I}\_t, \varvec{v}\_t)$$] is trained to predict whether a given task-space motion [$$\varvec{v}\_t$$] will result in a successful grasp, based on the current camera observation [$$\varvec{I}\_t$$]. In order to make accurate predictions, [$$g(\varvec{I}\_t, \varvec{v}\_t)$$] must be able to parse the current camera image, locate the gripper, and determine whether moving the gripper according to [$$\varvec{v}\_t$$] will put it in a position where closing the fingers will pick up an object. This is a complex spatial reasoning task that requires not only the ability to parse the geometry of the scene from monocular images, but also the ability to interpret material properties and spatial relationships between objects, which strongly affect the success of a given grasp. A pair of example input images for the network is shown in Fig. 2, overlaid with lines colored accordingly to the inferred grasp success probabilities. Importantly, the movement vectors provided to the network are not transformed into the frame of the camera, which means that the method does not require hand-to-eye camera calibration. However, this also means that the network must itself infer the outcome of a task-space motor command by determining the orientation and position of the robot and gripper. [] Fig. 2. Left: diagram of the grasp sample setup. Each grasp i consists of T time steps, with each time step corresponding to an image [$$\varvec{I}\_t^i$$] and pose [$$\varvec{p}\_t^i$$]. The final dataset contains samples [$$(\varvec{I}\_t^i, \varvec{p}\_T^i - \varvec{p}\_t^i, \ell \_i)$$] that consist of the image, a vector from the current pose to the final pose, and the grasp success label. Right: example input image pair provided to the network, overlaid with lines to indicate sampled target grasp positions. Colors indicate their probabilities of success: green is 1.0 and red is 0.0. The grasp positions are projected onto the image using a known calibration only for visualization. The network does not receive the projections of these poses onto the image, only offsets from the current gripper position in the frame of the robot. Data for training the CNN grasp predictor is obtained by attempting grasps using real physical robots. Each grasp consists of T time steps. At each time step, the robot records the current image [$$\varvec{I}\_t^i$$] and the current pose [$$\varvec{p}\_t^i$$], and then chooses a direction along which to move the gripper. At the final time step T, the robot closes the gripper and evaluates the success of the grasp (as described in Sect. 5), producing a label [$$\ell \_i$$]. Each grasp attempt results in T training samples, given by [$$(\varvec{I}\_t^i, \varvec{p}\_T^i - \varvec{p}\_t^i, \ell \_i)$$]. That is, each sample includes the image observed at that time step, the vector from the current pose to the one that is eventually reached, and the success of the entire grasp. This process is illustrated in Fig. 2. This procedure trains the network to predict whether moving a gripper along a given vector and then grasping will produce a successful grasp. Note that this differs from the standard reinforcement-learning setting, where the prediction is based on the current state and motor command, which in this case is given by [$$\varvec{p}\_{t+1} - \varvec{p}\_t$$]. [] Fig. 3. The architecture of our CNN grasp predictor. The input image [$$\varvec{I}\_t$$], as well as the pregrasp image [$$\varvec{I}\_0$$], are fed into a [$$6 \times 6$$] convolution with stride 2, followed by [$$3 \times 3$$] max-pooling and 6 [$$5 \times 5$$] convolutions. This is followed by a [$$3 \times 3$$] max-pooling layer. The motor command [$$\varvec{v}\_t$$] is processed by one fully connected layer, which is then pointwise added to each point in the response map of pool2 by tiling the output over the special dimensions. The result is then processed by 6 [$$3 \times 3$$] convolutions, [$$2 \times 2$$] max-pooling, 3 more [$$3 \times 3$$] convolutions, and two fully connected layers with 64 units, after which the network outputs the probability of a successful grasp through a sigmoid. Each convolution is followed by batch normalization. The architecture of our grasp prediction CNN is shown in Fig. 3. The network takes the current image [$$\varvec{I}\_t$$] as input, as well as an additional image [$$\varvec{I}\_0$$] that is recorded before the grasp begins, and does not contain the gripper. This additional image provides an unoccluded view of the scene. The two input images are concatenated and processed by 5 convolutional layers with batch normalization, following by max pooling. After the [$$5^\text {th}$$] layer, we provide the vector [$$\varvec{v}\_t$$] as input to the network. The vector is represented by 5 values: a 3D translation vector, and a sine-cosine encoding of the change in orientation of the gripper about the vertical axis.² To provide this vector to the convolutional network, we pass it through one fully connected layer and replicate it over the spatial dimensions of the response map after layer 5, concatenating it with the output of the pooling layer. After this concatenation, further convolution and pooling operations are applied, as described in Fig. 3, followed by a set of small fully connected layers that output the probability of grasp success, trained with a cross-entropy loss to match [$$\ell \_i$$], causing the network to output [$$p(\ell \_i = 1)$$]. The input matches are [$$512 \times 512$$] pixels, and we randomly crop the images to a [$$472\times 472$$] region during training to provide for translation invariance. Once trained the network [$$g(\varvec{I}\_t, \varvec{v}\_t)$$] can predict the probability of success of a given motor command, independently of the exact camera pose. In the next section, we discuss how this grasp success predictor can be used to continuous servo the gripper to a graspable object. 4.2 Continuous Servoing In this section, we describe the servoing mechanism [$$f(\varvec{I}\_t)$$] that uses the grasp prediction network to choose the motor commands for the robot that will maximize the probability of a success grasp. The most basic operation for the servoing mechanism is to perform inference in the grasp predictor, in order to determine the motor command [$$\varvec{v}\_t$$] given an image [$$\varvec{I}\_t$$]. The simplest way of doing this is to randomly sample a set of candidate motor commands [$$\varvec{v}\_t$$] and then evaluate [$$g(\varvec{I}\_t, \varvec{v}\_t)$$], taking the command with the highest probability of success. However, we can obtain better results by running a small optimization on [$$\varvec{v}\_t$$], which we perform using the cross-entropy method (CEM) [Rubinstein and Kroese 2004]. CEM is a simple derivative-free optimization algorithm that samples a batch of N values at each iteration, fits a Gaussian distribution to [$$M < N$$] of these samples, and then samples a new batch of N from this Gaussian. We use [$$N = 64$$] and [$$M = 6$$] in our implementation, and perform three iterations of CEM to determine the best available command [$$\varvec{v}\_t^\star $$] and thus evaluate [$$f(\varvec{I}\_t)$$]. New motor commands are issued as soon as the CEM optimization completes, and the controller runs at around 2 to 5 Hz. One appealing property of this sampling-based approach is that we can easily impose constraints on the types of grasps that are sampled. This can be used, for example, to incorporate user commands that require the robot to grasp in a particular location, keep the robot from grasping outside of the workspace, and obey joint limits. It also allows the servoing mechanism to control the height of the gripper during each move. It is often desirable to raise the gripper above the objects in the scene to reposition it to a new location, for example when the objects move (due to contacts) or if errors due to lack of camera calibration produce motions that do not position the gripper in a favorable configuration for grasping. We can use the predicted grasp success [$$p(\ell = 1)$$] produced by the network to inform a heuristic for raising and lowering the gripper, as well as to choose when to stop moving and attempt a grasp. We use two heuristics in particular: first, we close the gripper whenever the network predicts that [$$(\varvec{I}\_t, \emptyset )$$], where [$$\emptyset $$] corresponds to no motion, will succeed with a probability that is at least [$$90\%$$] of the best inferred motion [$$\varvec{v}\_t^\star $$]. The rationale behind this is to stop the grasp early if closing the gripper is nearly as likely to produce a successful grasp as moving it. The second heuristic is to raise the gripper off the table when [$$(\varvec{I}\_t, \emptyset )$$] has a probability of success that is less than [$$50\%$$] of [$$\varvec{v}\_t^\star $$]. The rationale behind this choice is that, if closing the gripper now is substantially worse than moving it, the gripper is most likely not positioned in a good configuration, and a large motion will be required. Therefore, raising the gripper off the table minimizes the chance of hitting other objects that are in the way. While these heuristics are somewhat ad-hoc, we found that they were effective for successfully grasping a wide range of objects in highly cluttered situations, as discussed in Sect. 6. Pseudocode for the servoing mechanism [$$f(\varvec{I}\_t)$$] is presented in Algorithm 1. [] [] Fig. 4. Images from the cameras of each of the robots during training, with each robot holding the same joint configuration. Note the variation in the bin location, the difference in lighting conditions, the difference in pose of the camera relative to the robot, and the variety of training objects. 5 Large-Scale Data Collection In order to collect training data to train the prediction network [$$g(\varvec{I}\_t, \varvec{v}\_t)$$], we used between 6 and 14 robotic manipulators at any given time. A diagram of one such robot appears on the right, and an illustration of our data collection setup is shown in Fig. 1. We collected about 800,000 grasp attempts over the course of two months, using between 6 and 14 robots at any given point in time, without any manual annotation or supervision. The data collection process started with random motor command selection and [$$T = 2$$], which was used to collect about half of the dataset. For the other half, the network was updated about 4 times, and the number of steps was gradually increased to [$$T = 10$$]. The last command is always [$$\varvec{v}\_T = \emptyset $$] and corresponds to closing the gripper without moving. When executing completely random motor commands, the robots were successful on 10%–30% of the grasp attempts, depending on the current objects. [] The objects were chosen to be common household and office items, and ranged from a 4 to 20 cm in length along the longest axis. Some of these are shown in Fig. 4. The objects were periodically swapped out to increase the diversity of the training data. Grasp success was evaluated using two methods: first, we marked a grasp as successful if the position reading on the gripper was greater than 1 cm, indicating that the fingers had not closed fully. However, this method often missed thin objects, and we also included a drop test, where the robot picked up the object, recorded an image of the bin, and then dropped any object that was in the gripper. By comparing the image before and after the drop, we could determine whether any object had been picked up. 6 Experiments To evaluate our continuous grasping system, we conducted a series of quantitative experiments with novel objects that were not seen during training. The particular objects used in our evaluation are shown in Fig. 5. This set of objects presents a challenging cross section of common office and household items, including objects that are heavy, such as staplers and tape dispensers, objects that are flat, such as post-it notes, as well as objects that are small, large, rigid, soft, and translucent. [] Fig. 5. Previously unseen objects used for testing (left) and the setup for grasping without replacement (right). The test set included heavy, light, flat, large, small, and translucent objects. 6.1 Experimental Setup The goal of our evaluation was to answer the following questions: (1) does continuous servoing significantly improve grasping accuracy and success rate? (2) how well does our learning-based system perform when compared to alternative approaches? To answer question (1), we compared our approach to an open-loop method that observes the scene prior to the grasp, extracts image patches, chooses the patch with the highest probability of a successful grasp, and then uses a known camera calibration to move the gripper to that location. This method is analogous to the approach proposed by Pinto and Gupta [2016], but uses the same network architecture as our method and the same training set. We refer to this approach as “open loop,” since it does not make use of continuous visual feedback. To answer question (2), we also compared our approach to a random baseline method, as well as a hand-engineered grasping system that uses depth images and heuristic positioning of the fingers. This hand-engineered system is described further in the extended version of the paper [Levine et al. 2016]. Note that our method requires fewer assumptions than either of the two alternative methods: unlike Pinto and Gupta [2016], we do not require knowledge of the camera to hand calibration, and unlike the hand-engineered system, we do not require either the calibration or depth images. We evaluated the methods using two experimental protocols. In the first protocol, the objects were placed into a bin in front of the robot, and it was allowed to grasp objects for 100 attempts, placing any grasped object back into the bin after each attempt. Grasping with replacement tests the ability of the system to pick up objects in cluttered settings, but it also allows the robot to repeatedly pick up easy objects. To address this shortcoming of the replacement condition, we also tested each system without replacement, as shown in Fig. 5, by having it remove objects from a bin. For this condition, which we refer to as “without replacement,” we repeated each experiment 4 times, and we report success rates on the first 10, 20, and 30 grasp attempts. [] Fig. 6. Failure rates of each method for each evaluation condition. When evaluating without replacement, we report the failure rate on the first 10, 20, and 30 grasp attempts, averaged over 4 repetitions of the experiment. 6.2 Comparisons The results are presented in Fig. 6. The success rate of our method exceeded the baseline and prior methods in all cases. Without replacement, our method cleared the bin after 30 grasps on one of the 4 attempts, and had only one object left in the other 3 attempts. The hand-engineered baseline struggled to resolve graspable objects in clutter, since the camera was positioned about a meter away from the table, and its performance also dropped in the non-replacement case as the bin was emptied, leaving only small, flat objects that could not be resolved by the depth camera. Many practical grasping systems use a wrist-mounted camera to address this issue [Leeper et al. 2014]. In contrast, our approach did not require any special hardware modifications. The open-loop baseline was also less successful. Although it benefited from the large dataset, which was more than an order of magnitude larger than in prior work [Pinto and Gupta 2016], it did not react to perturbations, movement of objects, and variability in actuation and gripper shape. [] Fig. 7. Left: grasps chosen for objects with similar blue appearance but different material properties. Note that the soft sponge was grasped with a very different strategy from the hard objects. Right: examples of difficult objects grasped by our algorithm, including objects that are translucent, awkardly shaped, and heavy. 6.3 Qualitative Results Qualitatively, our method exhibited some interesting behaviors. Figure 7 shows the grasps that were chosen for soft and hard objects. Our system preferred to grasp softer objects by embedding the finger into the center of the object, while harder objects were grasped by placing the fingers on either side. Our method was also able to grasp a variety of challenging objects, some of which are shown in Fig. 7. Other interesting grasp strategies, corrections, and mistakes can be seen in our supplementary video: https://​youtu.​be/​cXaic\_​k80uM 7 Discussion and Future Work We presented a method for learning hand-eye coordination for robotic grasping, using deep learning to build a grasp success prediction network, and a continuous servoing mechanism to use this network to continuously control a robotic manipulator. By training on over 800,000 grasp attempts from 14 distinct robotic manipulators with variation in camera pose, we can achieve invariance to camera calibration and small variations in the hardware. Our approach does not require calibration of the camera to the robot, instead using continuous feedback to correct errors resulting from discrepancies in calibration. Our experiments demonstrate that our method can effectively grasp a wide range of different objects, including novel objects not seen during training. Our results also show that our method can use continuous feedback to correct mistakes and reposition the gripper in response to perturbation and movement of objects in the scene. One of the most exciting aspects of the proposed grasping method is the ability of the learning algorithm to discover unconventional and non-obvious grasping strategies. We observed, for example, that the system tended to adopt a different approach for grasping soft objects, as opposed to hard ones. For hard objects, the fingers must be placed on either side of the object for a successful grasp. However, soft objects can be grasped simply by pinching into the object, which is most easily accomplished by placing one finger into the middle, and the other to the side. In future work, we plan to further explore the relationship between our self-supervised continuous grasping approach and reinforcement learning, in order to allow the methods to learn a wider variety of grasp strategies from large datasets of robotic experience. At a more general level, our work explores the implications of large-scale data collection across multiple robotic platforms. In the long term, this class of methods is particularly compelling for robotic systems that are deployed in the real world, and therefore are naturally exposed to a wide variety of environments, objects, lighting conditions, and wear and tear. A particularly exciting avenue for future work is to explore how our method would need to change to apply it to large-scale data collection across a large number of deployed robots engaged in real world tasks, including grasping and other manipulation skills. Acknowledgements We thank Kurt Konolige and Mrinal Kalakrishnan for additional engineering and discussions, Jed Hewitt, Don Jordan, and Aaron Weiss for help with hardware, Max Bajracharya and Nicolas Hudson for the baseline perception pipeline, and Vincent Vanhoucke and Jeff Dean for support and organization. References Bohg, J., Morales, A., Asfour, T., Kragic, D.: Data-driven grasp synthesis a survey. IEEE Trans. Robot. 30(2), 289–309 (2014)CrossRef Goldfeder, C., Ciocarlie, M., Dang, H., Allen, P.K.: The Columbia grasp database. In: IEEE International Conference on Robotics and Automation (2009) Hebert, P., Hudson, N., Ma, J., Howard, T., Fuchs, T., Bajracharya, M., Burdick, J.: Combined shape, appearance and silhouette for simultaneous manipulator and object tracking. In: IEEE International Conference on Robotics and Automation. IEEE (2012) Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T., Schaal, S.: Learning of grasp selection based on shape-templates. Autonom. Robots 36(1–2), 51–65 (2014)CrossRef Hudson, N., Howard, T., Ma, J., Jain, A., Bajracharya, M., Myint, S., Kuo, C., Matthies, L., Backes, P., Hebert, P.: End-to-end Dexterous manipulation with deliberate interactive estimation. In: IEEE International Conference on Robotics and Automation (2012) Kappler, D., Bohg, B., Schaal, S.: Leveraging big data for grasp planning. In: IEEE International Conference on Robotics and Automation (2015) Kragic, D., Christensen, H.I.: Survey on visual servoing for manipulation. Computational Vision and Active Perception Laboratory 15 (2002) Leeper, A., Hsiao, K., Chu, E., Salisbury, J.K.: Using near-field stereo vision for robotic grasping in cluttered environments. In: Khatib, O., Kumar, V., Sukhatme, G. (eds.) Experimental Robotics. STAR, vol. 79, pp. 253–267. Springer, Heidelberg (2014)CrossRef Lenz, I., Lee, H., Saxena, A.: Deep learning for detecting robotic grasps. Int. J. Robot. Res. 34(4–5), 705–724 (2015)CrossRef Levine, S., Pastor, P., Krizhevsky, A., Quillen, D.: Learning hand-eye coordination for robotic grasping with deep learning, large-scale data collection. arXiv preprint (2016). arXiv:​1603.​02199 Pinto, L., Gupta, A.: Supersizing self-supervision: learning to grasp from 50 k tries and 700 robot hours. In: IEEE International Conference on Robotics and Automation (2016) Redmon, J., Angelova, A.: Real-time grasp detection using convolutional neural networks. In: IEEE International Conference on Robotics and Automation (2015) Rubinstein, R., Kroese, D.: The Cross-Entropy Method. Springer, New York (2004)CrossRefMATH Siciliano, B., Khatib, O.: Springer Handbook of Robotics. Springer, Secaucus (2007)MATH Vahrenkamp, N., Wieland, S., Azad, P., Gonzalez, D., Asfour, T., Dillmann, R.: Visual servoing for humanoid grasping and manipulation tasks. In: 8th IEEE-RAS International Conference on Humanoid Robots (2008) Footnotes 1 An extended version of this paper is available online [Levine et al. 2016].   2 In this work, we only consider vertical pinch grasps, though extensions to other grasp parameterizations would be straightforward.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_17 Improving Grasp Performance Using In-Hand Proximity and Dynamic Tactile Sensing Radhen Patel¹  , Jorge Cañardo Alastuey¹   and Nikolaus Correll¹   (1) Department of Computer Science, University of Colorado at Boulder, Boulder, USA     Radhen Patel (Corresponding author) Email: radhen.patel@colorado.edu   Jorge Cañardo Alastuey Email: jorgecanardo@gmail.com   Nikolaus Correll Email: nikolaus.correll@colorado.edu Abstract We demonstrate how low-cost in-hand proximity and dynamic tactile sensing can dramatically improve the reliability of basic manipulation tasks. We use an array of infrared proximity sensors embedded in a transparent elastic polymer and an accelerometer in the robot’s wrist to extract proximity and dynamic tactile information that is inspired by the mechanoreceptors in the human skin. We break the manipulation task down into eight distinct phases and show (1) how proximity information can be used to improve reliability of picking and placing objects, and (2) how dynamic tactile information can be used to discern different phases of grasping. We present experimental results using a Baxter robot involved in a tower construction task. 1 Introduction Human grasping is a sequence of actions that elicit distinct tactile stimuli in the human hand. Some of these events such as contact, lift, or slip have been shown to be detectable by pressure sensors [6, 13]. While there exists a large variety of tactile sensors [4], they are not widely deployed in research and industrial robotic platforms. For example, in the recent Amazon picking challenge, only one out of 25 teams took advantage of contact sensing [3]. We believe the limited use of tactile sensing to have two reasons. First, tactile sensors are expensive and difficult to manufacture. Second, it is not fully clear how to best take advantage of the data [5], but for specific use cases such as slip detection [6] or tactile exploration [8]. We have developed a novel sensor for combined pressure and distance sensing that is low cost and simple to manufacture [12], and which can be retrofitted on a standard Baxter gripper (Fig. 1). In this paper, we explore using this sensor for monitoring a large variety of grasping events and experimentally demonstrate how sensor data can improve grasp success and provide cues that indicate successful completion of a grasp phase. Specifically, we demonstrate how proximity data in the finger and at its tip can be used to improve pre-grasp alignment, detect contact, lift, external disturbances (wrenching and tapping), placement and release, which are shown in Fig. 1. Using simple infrared distance sensors within a robotic gripper for reactive control during the final phase of grasping has been proposed as early as 1985 [1, 7]. In addition to contact/force [14], equipping robotic hands [1, 7] or skin [11] with distance sensors is attractive to improve a grasp during the approach phase. Optical proximity sensors were integrated inside the fingertips of a Barrett Hand in order to perform reactive online grasping [7]. In [10], the fingertips of robot manipulator TUM-Rosie were equipped with proximity sensors to measure the distance to objects. There also exist approaches that use both distance and force simultaneously. In [11], optical proximity and capacitive pressure sensing are combined to realize a multi-modal tactile sensing skin. In this paper, we use a Baxter robot to demonstrate a number of the above capabilities with a simple, low-cost array of combined pressure and distance sensors, thereby significantly improving the robot’s capabilities. [] Fig. 1. Various phases of grasping and associated events, from top left to bottom right: (1) Approach, (2) Alignment, (3) Contact, (4) Lift, (5) Shear/slip, (6) Disturbance, (7) Placement, and (8) Release. 2 Technical Approach The sensor is based on a commodity proximity and ambient light sensor VCNL 4010 (Vishay Semiconductors) that can be arranged in groups of 8 using an [$$I^2C$$] multiplexer (TCA9548A, Texas Instrument). At 100 kHz [$$I^2C$$] bus frequency, a single measurement requires 1470 [$$\mu s$$] including communication, allowing to read a [$$8 \times 8$$] array at 10 Hz and a strip of eight at up to 85 Hz. (In this paper the sensor is read at 20 Hz.) To enable force measurements, the chip is embedded in a thin layer of PDMS (Dow Corning Sylgard 184) that deforms under pressure. Sensor design is described in more detail in [12]. Each finger has seven sensors on its inner side and on sensor in its tip. This paper focuses on the use of the sensor array during grasping. Figure 2 illustrates the data that the sensor generates when holding a “Cubelet”, a modular, magnetic robot construction kit. [] Fig. 2. Actual pressure values when holding a 2-inch cube. The rightmost column shows the sensor values at the tip. 2.1 Bio-inspired Tactile Information Dexterous manipulation in humans can be broken down into a series of action phases that represent sub-goals of the grasping task [13]. These sub-goals correspond to mechanical events that can be detected by in-hand mechanoreceptors. There are four types of mechanoreceptors in the human hand, which can be categorized by their active area and response to static or dynamic stimuli. Nerve endings with small receptive fields are referred to as “Type I” units, and those covering larger areas as “Type II” units. Nerves that respond to static stimuli are denoted by “slowly adapting” (SA), and those that respond to dynamic events are denoted by “fast adapting” (FA). It has been shown in the past that signals that are equivalent to SA-I and FA-I can be extracted from high-bandwidth pressure sensing via appropriate signal processing [13], whereas SA-II like signals that respond to tangential loading or shear can be extracted from a series of pressure sensors [6] and FA-II like signals via wrist-mounted accelerometers [13]. A signal similar to SA-I estimate of the total fingertip force can be obtained by summing all the sensors values on each finger [13]: [$$\begin{aligned} F\_l(t) = \sum \_{i=1}^{7} f\_i(t). \end{aligned}$$] (1) Similarly, to obtain a signal similar to FA-I channel we use a discrete-time first order Butterworth high-pass filter with a cut-off frequency of 5 Hz for a 20 Hz sampling rate of the sensor values. [$$\begin{aligned} \tilde{F}(t) = \sum \_{i=1}^{14} h\_f(t)\*f\_i(t). \end{aligned}$$] (2) FA-II channels in humans let them sense the interaction between hand-held tools and the environment, among others. As in [13], we create a robotic analog FA-II sensory channel by taking the magnitude of the filtered three-dimensional accelerometer data from Baxter’s wrist. The filter applied to each of the three cartesian acceleration components is again a discrete-time Butterworth high-pass filter with a 33 Hz cut-off frequency, experimentally chosen for the 100 Hz sampling rate of our acceleration data stream. [$$\begin{aligned} \tilde{A}(t) = \sqrt{\sum \_{i=x,y,z}(h\_a(t)\*a\_i(t))^2} \end{aligned}$$] (3) 2.2 Pick-and-place Pipeline We use a simple pick-and-place pipeline to evaluate the contribution of proximity and tactile information on (1) improving reliability and (2) detecting grasp state transitions using Rethink Robotics’ Baxter robot and an Asus Xtion Pro, which provides a rough estimate of object pose (Fig. 3). The grasp pipeline is implemented using a state machine that models the states shown in Fig. 1. [] Fig. 3. Left: Experimental setup as seen through the ASUS Xtion Pro. The AR tag serves for calibration. Right: Calibrated view in RViz showing the robot model, superimposed point cloud, and object centers. The calibrated point cloud is shown in Fig. 3, right. We implement a simple perception pipeline using the Point Cloud Library (PCL) to segment the table top employing random sample consensus (RANSAC). Objects on top are then segmented using Euclidean clustering. Albeit this basic approach cannot differentiate objects that are touching each other, it provides the pose of simple cubic objects with an accuracy that is comparable to state-of-the-art approaches for object localization [9] and is limited by the resolution and accuracy of Kinect-like RGB-D sensors. For testing, the user selects an object to be picked up by entering a object number obtained from the perception pipeline. The robot then position one its left arms in a pre-grasp pose using an inverse kinematic solver and then execute the state machine shown in Fig. 1. After successfully grasping the object, the arm retracts back to its pre-grasp pose, and transitions to placing the object. The user can now choose where to place the Cubelet, which could be on top of any existing tower of Cubelets. 3 Results We have designed a series of experiments to demonstrate the use of combined proximity and force data to (1) improve individual steps during the grasping process, and (2) to detect relevant events during grasping. Grasping events: Figure 4 shows the sensor readings converted into SA-I, FA-I and FA-II signals during grasping a block as shown in Fig. 1. The gripper aperture is shown in the first row. A value of 50% corresponds to 2 in. aperture, the diameter of a Cubelet. The SA-I signals from the left and right finger are shown in the second plot. Increase in the SA-I values correspond to decrease in the distance of the fingers to the objects before contact and an increase in the applied pressure after contact. The FA-I estimate is shown in the third plot clearly indicating contact and release events. Shear, which eventually turns into slip, can be observed in both the SA-I and FA-I channels. Finally, the last plot shows an estimate of the FA-II signal, clearly indicating the tapping event, which is ignored by SA-I and only moderately expressed in FA-I as the object does not move relative to the finger (unlike during slip). [] Fig. 4. Time history data for an interaction between the fingersensor on Baxter, object and the environment as shown in Fig. 1. The various grasp events are marked in the figure. Improving Pre-grasp: Proximity information offer the possibility for improving the end-effector pose before contact is made. For instance grasps where both fingers need to simultaneously make contact. For example when removing a block from a tower, making contact with only one finger might topple the tower. Similarly, when picking up a marble, making contact with only one side of the gripper might let the object roll out the robot’s reach. To demonstrate this capability, we employ a simple proportional controller on the raw proximity values to center the object within the gripper before grasping. We use 1-inch cubes from the Yale-CMU-Berkeley (YCB) Object and Model Set [2] that are presented as a tower (Fig. 5, left) that the robot has to deconstruct. We investigate this situation by hard-coding a correct approach position and try to grasp the object 20 times each with and without taking advantage of the proximity sensors. We perform 7 such trials with additive uniform random noise of [0.1,0.7]cm with a std. dev. of 0.1cm. The results are shown in Fig. 5 (right). We clearly observe a drastic decrease in the performance without the sensor feedback on addition of only little amounts of noise and a considerably robustness against a [$$\pm 0.5$$] cm of noise with the sensor feedback. When noise is increased further, the uncertainty in the position is too high and a successful grasp would require additional sensing and control. [] Fig. 5. Left: Experimental Setup. Middle: Schematic drawing of the experimental setup. Closing the gripper might topple the top building block unless the gripper is centered, here by moving left. Right: Success rates for grasp trials with increasing noise in the pre grasp position (drawn from a uniform random variable). Contact events: Figure 6 shows the sensor signals during a contact event, that is zero distance and zero force. This signal can be observed as a high frequency event that passes the highpass filter (Eq. 2) in Fig. 6, left, whereas the inflection point in the reading when the sensor changes from the proximity to the force domain is shown in the pressure values in Fig. 6, right (Fig. 7). [] Fig. 6. Signals during a contact event, where the gripper closes against a cubelet. The time scale is the same as in the snapping event (Fig. 8), but the dependent axis is scaled differently, as the changes are orders of magnitude greater. [] Fig. 7. FA-I signals during a placement event, where a block held by the gripper touches the table. The time scale is the same as in the snapping event (Fig. 8). On the left, a single event. On the right, the overlap of seven such events, aligned using their cross-correlation. Placement: Placing a cube on the table should lead to shear sensation similar to that induced by an external force as shown in Fig. 1. Figure 8 shows sensor data when placing a cube onto a table, which is indicated by a peak in FA-I. In this experiment, the robot kept pressing the cube onto the table, leading to vibration, resulting in continuing, periodic excitation of the FA-I signal. The amplitude of the signal depends on the velocity with which the robot places the cube. This becomes clear when recording a magnetic latching event on the Cubelets, which are connected by strong Neodymium magnets (Fig. 8), and “snap” together forcefully once brought together closely. [] Fig. 8. Signals during a snapping event, where a cubelet snaps into place against another one held by the enhanced gripper. System evaluation: To evaluate sequences of object picking and placing events via a robotic arm and state the contribution of the sensors at improving the individual grasping events. We compare the reliability of a tower construction task using (1) only RGB-D information from the camera and feed-forward arm control with (2) complementing position information with tactile sensing and reactive control. In the standard approach with no feedback, we directly use the object poses returned by the perception pipeline to calculate an inverse kinematic solution. While in the experiments with sensor feedback we are additionally using the proximity information from the tip sensors to validate the location of the tower and make appropriate local motions. All the experiments were run with the same hand-to-eye calibration, and the differences in performance are solely attributed to the tactile/proximity sensors. Every experiment starts with a set of Cubelets distributed over a table. We consider a trial successful when a 3-level high tower has been built, and failed when either grasping a Cubelet or placing it successfully fails. It turns out that this task is quite difficult for an inaccurate system consisting of RGB-D sensing and the Baxter robot. The success rate of constructing a tower two level high was merely 20% (2/10 trials). Here, the typical failure mode were mainly attributed to the approach position being too high, leading to a failed grasp or the cube falling off the tower, and sometimes to the approach position being too low, leading to collisions with the table or the tower itself (Fig. 9). [] Fig. 9. Left: Schematic drawing of the tower construction task. If the gripper is not centered over the tower, the block might topple after placement. If the pre-grasp position is too low, the tower might fall. Finally, if the pre-grasp position is to high, the added block might fall. Right: Success rate to build towers of height two and three with and without tactile sensing. The failure modes in the trials with sensor feedback are fairly different. Picking is essentially always successful: the finger sensors are capable of correcting the uncertainty (centering) in the poses provided by the vision system, and all the grasps are consistently flush against the table. Most failures were observed when placing the Cubelets. A common cause was due to the magnetic snaps of the Cubelets. When the arm is trying to center the block on top, the magnetic forces between cubes may drag the existing tower, making it impossible to center the next block. A similar failure happens when the tower rotates a few degrees and then snaps, but the faces are no longer aligned. 4 Discussion We have shown how proximity sensing alone can drastically improve grasp performance by improving alignment and preventing collisions. In particular, manipulation might fail at each individual sub-task, making it exponentially harder to complete longer grasping sequences. This problem can only be alleviated by introducing error detection and correction, which we wish to investigate in further work. Given appropriate calibration, proximity data might also be used to register the gripper motion against a known 3D model of an object [9], which can then be used to adjust the end-effector’s pose. Similarly, machine learning could be used to train a classifier on the expected performance of a grasp using proximity/tactile data alone, allowing the end-effector to optimize its pose reactively. Experiments in this paper have been triggered manually. While the contact event can be clearly discerned and easily detected by simple logic, the placement signal is weaker and depends on how gentle the robot places the object. Training appropriate classifiers to automatically discern this event is subject of further work. Information provided by the FA-I and FA-II channels could also be used to detect successful lifting events, as the load characteristics abruptly change once the object has been lifted from the table. The objects used here are too lightweight, however, they elicit a measurable response. A channel that we did not explore in this paper is the SA-II signal, which is excited by shear forces and slip. SA-II-like signals could be extracted from temporal differences across neighboring sensors [6], which is required to determine when shear turns into slip. 5 Conclusion We have demonstrated application of a low-cost sensing modality for measuring both proximity and dynamic tactile information during an 8-step grasping sequence. Despite its simplicity and low-cost, the proposed sensor provides unique and repeatable signatures for a large number of events ranging from contact, release, and disturbances by wrenching and tapping. At the same time, the sensor is able to measure distance, which allows to correct pre-grasp pose. The sensor reaches its limitations when detecting loading for light objects such as small wooden cubes or the Cubelets as larger forces are required to lead to a detectable displacement of the object in the gripper. In future work, we wish to use the observed signatures to automatically drive the grasping process. References 1. Balek, D., Kelley, R.: Using gripper mounted infrared proximity sensors for robot feedback control. In: Proceedings of the International Conference on Robotics and Automation (ICRA), vol. 2, pp. 282–287. IEEE (1985) 2. Calli, B., Walsman, A., Singh, A., Srinivasa, S., Abbeel, P., Dollar, A.M.: Benchmarking in manipulation research: The YCB object and model set and benchmarking protocols (2015). arXiv preprint: arXiv:​150203143 3. Correll, N., Bekris, K.E., Berenson, D., Brock, O., Causo, A., Hauser, K., Okada, K., Rodriguez, A., Romano, J.M., Wurman, P.R.: Analysis and observations from the first Amazon picking challenge. IEEE Trans. Autom. Sci. Eng. (2016) 4. Dahiya, R.S., Metta, G., Valle, M., Sandini, G.: Tactile sensing: from humans to humanoids. IEEE Trans. Robot. 26(1), 1–20 (2010)CrossRef 5. Dahiya, R.S., Mittendorfer, P., Valle, M., Cheng, G., Lumelsky, V.J.: Directions toward effective utilization of tactile skin: a review. IEEE Sens. J. 13(11), 4121–4138 (2013)CrossRef 6. Heyneman, B., Cutkosky, M.R.: Slip classification for dynamic tactile array sensors. Int. J. Robot. Res. 35(4), 404–421 (2015)CrossRef 7. Hsiao, K., Nangeroni, P., Huber, M., Saxena, A., Ng, A.Y.: Reactive grasping using optical proximity sensors. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2098–2105 (2009) 8. Hsiao, K., Kaelbling, L.P., Lozano-Pérez, T.: Task-driven tactile exploration. In: Robotics: Science and Systems Conference (2010) 9. Ma, L., Ghafarianzadeh, M., Coleman, D., Correll, N., Sibley, G.: Simultaneous localization, mapping, and manipulation for unsupervised object discovery. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 1344–1351 (2015) 10. Maldonado, A., Alvarez, H., Beetz, M.: Improving robot manipulation through fingertip perception. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2947–2954. IEEE (2012) 11. Mittendorfer, P., Yoshida, E., Cheng, G.: Realizing whole-body tactile interactions with a self-organizing, multi-modal artificial skin on a humanoid robot. Adv. Robot. 29(1), 51–67 (2015)CrossRef 12. Patel, R., Correll, N.: Integrated force and distance sensing for robotic manipulation using elastomer-embedded commodity proximity sensors. In: Robotics: Science and Systems, Ann Arbor, MN (2016) 13. Romano, J.M., Hsiao, K., Niemeyer, G., Chitta, S., Kuchenbecker, K.J.: Human-inspired robotic grasp control with tactile sensing. IEEE Trans. Robot. 27(6), 1067–1079 (2011)CrossRef 14. Tenzer, Y., Jentoft, L.P., Howe, R.D.: The feel of MEMS barometers: inexpensive and easily customized tactile array sensors. IEEE Robot. Autom. Mag. 21(3), 89–95 (2014)CrossRef Manipulation © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_18 Learning Object Orientation Constraints and Guiding Constraints for Narrow Passages from One Demonstration Changshuo Li¹   and Dmitry Berenson² (1) Worcester Polytechnic Institute, Worcester, USA (2) University of Michigan, Ann Arbor, USA     Changshuo Li Email: cli7@wpi.edu Abstract Narrow passages and orientation constraints are very common in manipulation tasks and sampling-based planning methods can be quite time-consuming in such scenarios. We propose a method that can learn object orientation constraints and guiding constraints, represented as Task Space Regions, from a single human demonstrations by analyzing the geometry around the demonstrated trajectory. The key idea of our method is to explore the area around the demonstration trajectory through sampling in task space, and to learn constraints by segmenting and analyzing the feasible samples. Our method is tested on a tire-changing scenario which includes four sub-tasks and on a cup-retrieving task. Our results show that our method can produce plans for all these tasks in less than 3 min with 50 / 50 successful trials for all tasks, while baseline methods only succeed 1 out of 50 times in 30 min for one of the tasks. The results also show that our method can perform similar tasks with additional obstacles, transfer to similar tasks with different start and/or goal poses, and be used for real-world tasks with a PR2 robot. Keywords Learning from demonstrationConstraints learningManipulation planning This work was supported in part by the ONR grant N00014-13-1-0735. 1 Introduction Many manipulation tasks, such as changing the tire on a car, require several operations where the robot must navigate an object through a narrow passage (e.g. removing a nut from a stud or removing the tire from the hub). These narrow passages are induced by the geometries of the objects. There are also some manipulation tasks where pose constraints must be obeyed, such as not tilting a cup of water. These requirements further constrain the motion of the object. While motion planning algorithms capable of performing such tasks exist, they cannot easily be biased to search the relevant parts of the space when the constrains come from narrow passages or require manual input of pose constraints. These methods are either time-consuming or require significant domain knowledge on the part of the user. In this paper we propose to learn the relevant area of the C-space to search from human demonstration of the task. Our framework is able to do this, not by attempting to follow the demonstration, but instead by inferring constraints from it; exploring regions around it by sampling, analyzing the geometric properties of the samples, and then extracting a relevant feasible area represented as a set of Task Space Regions (TSRs) [4]. The TSRs derived from this analysis allow us to generate plans for similar tasks much faster than planning without such constraints in the case of narrow passages and allow performing the task without violating pose constraints on the object. Our key contribution is that instead of requiring multiple demonstrations to form statistical models of how the object should move [2, 15] we learn pose constraints (specifically constraints on the object’s orientation necessary for the task) and guiding constraints (i.e. constraints that limit our search space to only the relevant parts of the C-space) from a single demonstration. The results show that the constraints we learn allow planning for tire-changing and cup-retrieval tasks which outperforms other methods in terms of computation time and success rate. These constraints can also transfer to similar tasks, and can be used for real-world tasks with a PR2 robot. The remainder of this paper is arranged as follows: Sect. 2 reviews related work in this area. We describe the problem we address in Sect. 3. Our method is then explained in detail in Sect. 4. Results of experiments in simulation and real world mock-ups are presented in Sect. 5. Finally, we discuss some drawbacks of our method, and conclude the paper in Sect. 6. 2 Related Work Previous methods for learning constraints from demonstration can be divided into two classes according to the type of input: (1) Kinesthetic demonstrations, such as in [1, 11, 15]. The advantage of this kind of input is that the task is demonstrated in the robot’s C-space directly, so there is no retargeting problem; but this kind of input requires the demonstrator to have a good understanding of the robot’s kinematics, especially in high-DOF cases, otherwise the demonstration can be noisy and redundant. (2) Natural human motion, such as in [3, 9] with some also gathering verbal comments, e.g. [10]. In this case, we only need the demonstrator to act naturally, but retargeting the demonstrated motion is a challenge. Most methods that learn constraints from demonstration require multiple demonstrations, which are often used to compute the variance along the trajectory. Some additional pre-processing, such as data alignment [1, 15], is also needed. Given the aligned data, represent the multiple demonstration trajectories [5] or key frames [1] as Gaussian Mixture Models (GMMs), and then a solution for the task is given by Gaussian Mixture Regression (GMR). The drawback of these methods is that they do not generalize to new environments where the task is similar but new obstacles are present and/or the start/goal are moved. To overcome this limitation, [15] learns a cost function from multiple demonstrations, and uses a sampling-based planner to find a feasible path with low cost. Others have explored using a linear-chain Conditional Random Field paired with motion-based features to detect and extract the rigid constraints which arise between pairs of objects [3, 9]. After that they use an interactive GUI to refine the learned constraints. In our approach we wish to have the user provide the minimum information possible, so we limit our input to the demonstration alone. What distinguishes our work from those above is (1) We learn from only a single demonstration; (2) Our method does not require any input beyond the demonstration; (3) Our method does not require transferring a human motion to the robot (i.e. solving the retargeting problem); (4) The constraints we learn can transfer to similar tasks (i.e. a different start/goal pose or additional obstacles; and (5) Our method scales to tasks in SE(3), such as changing a tire. 3 Problem Statement A task is defined by a moving object, a reference object, a start pose and a goal pose of the moving object w.r.t. the reference object. The information we extract from the human demonstration is a trajectory of the moving object in the task space, which is SE(3). This trajectory should be feasible and should connect the start pose and the goal pose of the task. So the input of our algorithm is a trajectory of poses of the moving object, the id of the moving object, the id of the reference object and the geometric model of the demonstration environment. The output of our algorithm is a pose constraints represented as Task Space Region (TSR)¹ [4] in the world frame and a series of guiding constraints represented as TSRs in the frame of the reference object. The goal of our algorithm is to learn pose constraints and guiding constraints from the input, which will allow us to achieve fast planning across similar tasks. A similar task is defined as a feasible task which has the same moving object and reference object as the demonstrated task and either the start transform or the goal transform of the moving object w.r.t. the reference object is the same as the demonstrated task. The environment may or may not have a different arrangement of obstacles. 4 Technical Approach Our algorithm is described in Fig. 1. First, we capture the demonstration motion using a motion capture system. From the demonstrated series of poses, we learn the pose constraints, which represent the range of orientations the moving object can have in this task. We then seek to learn the guiding constraints. As part of this process we calculate the ratio of feasible object poses samples vs. total samples around the demonstrated trajectory by rejection-sampling poses around every demonstrated pose of the moving object. The demonstration trajectory is then segmented based on this ratio. After segmentation, a guiding constraint is learned from the samples of each segment. These constraints are represent by TSRs. Finally the learned guiding constraints (TSRs) are input into the sampling-based planner CBiRRT [4] to generate a path for a robot to perform the task. We describe each step in detail below. [] Fig. 1. A diagram of our constraint learning algorithm 4.1 Learning Pose Constraints from Demonstrated Poses Unlike geometric constraints, which are induced by the geometries of the objects in the environment, pose constraints are often induced by additional requirements of the task, such as not tilting a cup of water. In many of these cases, the pose constraints are only related to the orientation of the moving object, such as keeping the cup upright regardless of its position. Thus our pose constraints are the allowable range of orientations of the moving object. We choose the Euler angles (Roll, Pitch and Yaw) as the parametrization of the orientations. Similar to Principle Component Analysis (PCA), we wish to find a reference frame in which the principal components or components with the largest weight are aligned with the axes. But the Euler angle space is not a linear space, so instead we try to find a reference frame in which the volume of the Axis-Aligned Bounding Box (AABB) of the demonstrated orientations is the smallest. A Random Volume Decent method is used to achieve this: First, we start with an arbitrary reference frame, and calculate the volume of the AABB in this frame. Then, we apply a small random rotation to the reference frame and get the volume of the AABB in this new frame. If the volume decreases, we start from this new frame and try to find a better one; otherwise we go back to the old reference frame, generate a new small random rotation, and try again. The process terminates when we cannot find a better reference frame after T consecutive trials. In the resulting reference frame, we calculate the range of the demonstrated orientations in all three dimensions (Roll, Pitch and Yaw). Then we say a dimension is unconstrained if the range of this dimension is greater than an angle [$$\alpha $$]; otherwise this dimension is constrained and the orientation of the moving object should not go out of the range of this dimension. These rotation constraints are then input into a TSR which has unbounded position constraints to produce the pose constraint for the task. [] Fig. 2. The grey region is an obstacle, the blue cross is the demonstrated pose, the red circles are the feasible sampled poses, the black dashed lines show the sampling range. In this case, the samples are divided into two class, the left and the right. Only the left class can be reached from the demonstrated pose, so samples in the right class are also treated as infeasible. 4.2 Exploring Task Space Near the Demonstration Trajectory We sample poses of the moving object around each of the poses in the demonstration trajectory in order to explore the feasibility of the local task space so that we can compute appropriate guiding constraints. The sampling can be considered as a random rotation and translation of the moving object in the demonstrated pose frame. The random translation is uniformly chosen from a cube centered at the origin of [$$\mathbb {R}^3$$], and the size of the cube is decided by the size of the moving object. Here we use unit quaternions as the parametrization of the rotations. A random rotation is generated by a uniformly random spin around a uniformly random axis. We intentionally use this random axis-angle method instead of the more common uniformly-random quaternion method [13] because the axis-angle method’s distribution is denser around the demonstrated pose than the uniformly-random quaternion. This is useful in our application because the nearer a pose to the demonstrated pose, the higher chance it is feasible and informative. We then discard a pose sampled in this way if it is outside the learned pose constraint described in Sect. 4.1. Next, we check collision for each sampled pose to evaluate feasibility. Both the sampled pose and its feasibility are recorded for later use. The sampling terminates when either the number of feasible samples or the number of total samples reaches a predetermined threshold. After sampling, we need to examine the connectivity of the feasible samples (not all feasible poses are reachable from the demonstrated pose due to obstacles). This is illustrated in Fig. 2. The connectivity is examined by a local planner, which tries to connect samples by a straight line in SE(3). This connectivity check can divide the feasible samples into several classes. Due to the inconsistency between the real world and the simulation environment, the demonstrated poses may not always be feasible. Thus if the demonstrated pose is feasible, then the set of samples that can be connected to the demonstrated pose by the local planner is chosen to be the feasible connected class; otherwise, the class nearest to the demonstrated pose is chosen. Finally we calculate the feasible sample ratio, the ratio of the number of samples of the feasible connected class vs. the total number of samples. We compute this sample ratio for every demonstrated pose, thus obtaining a series of ratios for the entire trajectory. 4.3 Trajectory Segmentation We wish to segment the trajectory using the feasible sample ratio because different regions of a task have significantly different constraints, and they should be represented by different TSRs. For example, when removing a nut from a stud, the nut is highly constrained when it is on the stud, and is free when it is off the stud. The key to segmenting the trajectory into regions where different guiding constraints are active is to identify points where there are significant changes in the feasible sample ratio. To do this, we represent the feasible sample ratio as a time-varying signal across the duration of the demonstrated trajectory. We then smooth this signal using Total Variation Denoising (TDV) [12]. TDV has the advantage that it smooths noise in relatively flat stages while not shifting step edges [14]. Each step change in the smoothed signal implies a significant change of the feasible sample ratio and represents the start of a new segment. We then need to fit a series of step functions to the smoothed signal (i.e. a “staircase” function). Each flat region of the staircase is a segment. Given a number of stairs k, the fitting problem can be formulated as [8]: [$$\begin{aligned} Y\_n = X\_n\beta ^n+\epsilon \_n=X\_nN\_k\beta ^k+\epsilon \_n \;, \end{aligned}$$] (1) where n is the length of the signal, [$$Y\_n$$] is a n-by-1 vector of the feasible sample ratios, [$$X\_n$$] is a n-by-n lower triangular matrix with non-zero elements equal to one, [$$\epsilon \_n$$] is a n-by-1 vector of the residual error, and [$$\beta ^n$$] is a n-by-1 vector having all its components equal to zero except those k numbers corresponding to the start of a new stair. We then rewrite [$$\beta ^n$$] as [$$N\_k\beta ^k$$], where [$$\beta ^k$$] is a k-by-1 vector of all k non-zero elements, and [$$N\_k$$] is a n-by-k matrix, each column of which is one of the trivial orthonormal basis of [$$\mathbb {R}^n$$]. Then the optimal k-stair function is the one has the minimum residual squared error over all possible [$$N\_k$$] and [$$\beta ^k$$]: [$$\begin{aligned} \hat{N\_k}=\mathop {\text {argmin}}\limits \_{N\_k}\{\min \_{\beta ^k}\{(Y\_n-X\_nN\_k\beta ^k)^T(Y\_n-X\_nN\_k\beta ^k)\}\} \;. \end{aligned}$$] (2) For a given [$$N\_k$$], the inner optimization is a trivial linear fitting problem. A straightforward way to solve the above optimization problem is to run over all possible [$$N\_k$$], which means to try all possible k-combinations of the trivial orthonormal basis of [$$\mathbb {R}^n$$]. Note that there is always a stair at the first point, so [$$[1,0,\cdots ,0]^T$$] should always be included in the k-combinations. We set an upper limit on k to test, which is defined as [$$K\_{max}$$]. We then need to determine the optimal k, which we call [$$k^\*$$]. Inspired by [7], we assume that as k grows larger, the improvement of total residual square error should be large if [$$kk^\*$$]. Thus the optimal [$$k^\*$$] is found by successively fitting with larger k values until a large drop in improvement is observed. To avoid over-fitting, we stop testing larger ks when the residual squared error is less than [$$1\%$$] of the total squared error. The residual square error at k is: [$$\begin{aligned} \sigma ^2\_k=\min \_{\beta ^k}\{(Y\_n-X\_n\hat{N\_k}\beta ^k)^T(Y\_n-X\_n\hat{N\_k}\beta ^k)\} \;. \end{aligned}$$] (3) Then we define the improvement at k as: [$$\begin{aligned} I\_k=\sigma ^2\_{k-1}/\sigma ^2\_k \;. \end{aligned}$$] (4) And the optimal [$$k^\*$$] satisfies: [$$\begin{aligned} k^\*=\min \{k|I\_{k+1} \epsilon > 0$$]), there exists a graph distance metric [$$\delta $$] ([$$\delta > \epsilon $$]) such that the likelihood of a trajectory passing through [$$\epsilon $$] given that it passed through [$$\delta $$] is greater than [$$\frac{1}{number of humans}$$]. The ratio [$$\frac{\delta }{\epsilon }$$] serves as a confidence measure for whether the state is controlled. 2.3 Human Contact-Control Strategy A contact manipulation event requires a control strategy to realize a sequence of contact states between two objects, starting in some contact state (usually free space), passing through many intermediate states, and ending in some final state (usually a desired stable configuration). A human contact-control strategy maps the current state of a manipulated object, [$$q(t), \dot{q}(t)$$], the final contact state [$$s\_f$$], and measured interaction forces f(t), to an applied control force [$$f\_c(t)$$]. Applying a control strategy results in a traversal of the contact-state graph. A human control strategy is thus the mapping, [$$(s\_f, f(t), q(t), \dot{q}(t)) \rightarrow f\_c(t + \delta \,t)$$]. The present contact state is a deterministic function of the object configuration, [$$s(t) = state(q(t))$$]. However, this function may not be known to a human. Here, we focus on the role of contact-state control in the model for human contact strategies, with the ultimate goal being to completely identify the human’s control strategy. We make the following assumptions in our subsequent analysis of human contact-control strategies: (1) After an initial training period, the change in human contact-control strategies is small; (2) Different humans have similar contact-control strategies when at contact states that are visited more times than the third quartile; (3) It is feasible to study human contact-control strategies with physics simulations that provide realistic visual and haptic feedback; (4) The control force response time, [$$\delta \,t$$], is empirically assigned to the median human neural signal transmission plus response return times [20]. Having developed semantics that allow us to describe contact-control, we proceeded to test the following three hypotheses for human contact-control: - (h1) Contact-waypoint hypothesis. The human contact-control strategy involves tracking pre-planned contact-state waypoints, resulting in graph traversals that tend to be deterministic. As a consequence, this hypothesis would predict that each human has a set of strategies to draw upon while traversing the graph, and that the number of possible unique traversals would grow linearly with the number of subjects. Moreover, this hypothesis would also predict that each contact-waypoint (a controlled state) would be visited roughly the same number of times by humans while they traverse the contact graph. As such, we can reject this hypothesis if the traversal counts of the majority of controlled states differ by at least [$$\mathcal {O}(n\_{subjects})$$] for a small multiplicative constant. Alternate evidence would be the lack of repeatability in human contact trajectories across demonstrations. - (h2) Controlled subgraph hypothesis. Humans explicitly control some subsets of states in the graph encountered during a traversal, in addition to the pre-determined initial and final goal states. If this hypothesis is true, we expect clusters of controlled graph states to emerge in the analysis. - (h3) State policy hypothesis. Humans use a control policy where the only controlled states are the starting state for the task and the goal state. This hypothesis can be rejected if there exists another controlled state besides the initial and final. Note that testing our hypotheses requires empirical data from a task of sufficient complexity, which led us to analyze data from the L- and S-object tasks instead of the box-on-plane task. The complexity of an experiment to reject the contact-waypoint hypothesis, for instance, should ideally limit the ability of humans to emulate other humans’ contact-state trajectories even if they have the identical contact-control strategy. That is, whether or not they have an identical controller, it should be likely that they will end up exploring different parts of the contact-state graph. 2.4 Analyzing Controlled States Performing the hypotheses tests requires a method to analyze controlled states in the human demonstration data. If a state [$$s\_i$$] is controlled, it is likely to be visited frequently across graph traversals from the human trials. Additionally, if a region in a graph is entered, it is likely that the graph trajectory will also pass within a small distance [$$\epsilon $$] of the state if it is controlled. In this paper, [$$\epsilon $$] was chosen to be zero, as identifying appropriate distance metrics between contact states is non-trivial and will be explored in future work. To discover possibly controlled states in the human graph traversals, a cluster of states [$$C\_i$$] is formed around each state [$$s\_i$$] (which we will refer to as the central node of the cluster) using the K-level neighbor aggregation operator. For each state [$$s\_i$$], we count the number of times [$$n\_{C\_i}$$] where a human graph traversal entered cluster [$$C\_i$$] (i.e. a state [$$s\_j \in C\_i$$] is visited after previously visiting a state [$$s\_k \not \in C\_i$$]). If [$$s\_i$$] is also visited when [$$C\_i$$] is entered, the number of visits [$$n\_{s\_i}$$] to the state [$$s\_i$$] is incremented by 1. The ratio [$$n\_{s\_i}/n\_{C\_i}$$] indicates the likelihood of passing through the state [$$s\_i$$] whenever the surrounding region of the graph was entered across all the human graph traversals. 3 Experiments To test our hypotheses for human contact-control strategies, we developed a haptic simulation environment to observe humans performing two insertion-type contact tasks [19]. Our haptic simulation environment uses a right-handed Force Dimension Sigma 7 haptic interface [22], a dynamic proxy controller, and the Open Dynamics Engine for physics [23]. While creating stable simulations is challenging [24–29], we achieved stable contact manipulation by requiring geometric shapes to be composed of sets of cuboids and resolved instabilities by pruning contact state transitions that happened at a frequency associated with solver instabilities. We avoided using massless proxy methods since ignoring inertial effects can introduce a bias in studies of human motion. The tasks we studied were selected to have sufficient complexity to introduce a large number of contact states (see Fig. 1). The experiments consisted of having 11 subjects³ insert L- and S-shaped objects into corresponding holes. [] Fig. 4. Controlled states in L-Object trajectories. A, B, C. Clustering neighboring nodes revealed that only a few contact states were visited regularly whenever a human trajectory entered the state’s cluster, indicating they were controlled ([$$\{c\}$$]). D. Possibly controlled states can be observed by varying the controlled state angular threshold and the controlled state radial threshold. The ideal controlled state is one where the state is visited every time its surrounding cluster is entered. E. There exists a value for the radial threshold where the fraction of states visited regularly was invariant to the level of clustering across varying values of angular controlled threshold used to classify a state as controlled or other ([$$\{o\}$$]). Subjects were pre-trained over 20 trials to apply control forces and moments within the device’s limits, which were indicated with a slider bar on the screen (removed later for data-trials). During the training period, the time taken was displayed, and the longest time taken from the last ten training trials was set as the upper bound. Subjects then performed 50 data trials for each task. The environment pose was randomized to a new configuration for each trial. A success or failure message was displayed in text if the peak force or maximum time was exceeded. Subjects were not provided with any additional instructions on how to perform the task except to attempt to stay below the maximum force limits and perform the task in the allotted time. 4 Results Figure 4(A–C) shows, for each [$$s\_i$$], the number of states in the cluster, the number of visits to the cluster, and the number of visits to state [$$s\_i$$] (the central node of the cluster) for the L-object insertion task accumulated over all of the human subject trials. Observe that for large values of central node visits and number of cluster nodes the density of states is low, and thus the number of controlled states is likely low relative to the total number of visited states. Additionally, the relative locations of the likely controlled states (Fig. 4(A–C)) varied little across the three cluster groupings. Figure 5(A–D) shows similar results on a different task (the S-Object insertion). Interestingly, there is a similar distribution of states as the cluster size is varied. This can be seen more clearly by varying two parameters – the controlled state angular threshold (i.e. the ratio of central node visits to cluster visits) and the controlled state radial threshold (Fig. 4(D)) – and observing how the ratio of controlled states ([$$\{c\}$$]) to other states ([$$\{o\}$$]) changes. The radial parameter removes states that are likely visited due to noise or random exploration. States closer to the ideal controlled line indicate a state that is controlled across all human subjects. The grey region in (Fig. 4(D)) indicates the states that would be considered as controlled for the given value of the controlled state angular and radial thresholds. [] Fig. 5. Controlled states in S-Object trajectories. A, B, C. Similar to the L-Object, clustering neighboring nodes revealed that only a few contact states were visited regularly whenever a human trajectory entered the state’s cluster, indicating they were controlled ([$$\{c\}$$]). D. Possibly controlled states can be observed by varying the controlled state angular threshold and the controlled state radial threshold. The ideal controlled state is one where the state is visited every time its surrounding cluster is entered. E. There exists a value for the radial threshold where the fraction of states visited regularly was invariant to the level of clustering across varying values of angular controlled threshold used to classify a state as controlled or other. Figure 4(E) shows the ratio of controlled states to other as the controlled state angular and radial thresholds are varied for the different levels of clustering. The angular threshold is normalized with respect to the ideal controlled state (i.e. a state which is visited every time its surrounding cluster is entered), and the radial threshold is normalized with respect to the maximum radius of any state across the three levels of clustering. The error bars represent 95th percentile confidence intervals obtained by bootstrapping the ratio of controlled states to other across the given values of controlled state angular and radial thresholds [30]. Note that there exists a value for the controlled state radial threshold where the data points for each of the [$$1,2,3-level$$] clustering on Fig. 4(E) closely coincide across varying values of the controlled states angular threshold. Thus, the fraction of states visited regularly (possibly controlled states) was invariant to the level of clustering. Similar results are observed on the S-object insertion task (Fig. 5(E)). 5 Experimental Insights In summary, our goal was to construct a framework to help map human contact-control demonstrations to a notion of the underlying control strategy. To do so, we developed a definition for a controlled state and a classification method to help categorize whether specific contact states encountered by humans were controlled or not. An open problem in contact control is how to overcome the combinatorial growth in complexity associated with manipulation in contact for non-convex objects. We earlier found that it is unlikely that humans develop control strategies that explicitly rely on contact trajectory planning [19]. In addition, we found that it is also unlikely that humans simply perform random walks using a state-policy. Specifically, our results posit that humans tend to reliably visit a few specific contact states if they enter the state’s vicinity in the graph, which is evidence against h3, yet do not always visit said states (visit probability [$$<40\%$$]), which is evidence against h1 if different humans adopt similar strategies. The remaining possibility within our set of contact-control hypotheses, that humans control to a select subset of states while performing a directed walk along the contact graph, is consistent with our results and should be considered a standing hypothesis for human contact control. Moreover, our haptic demonstration framework and controlled-state classification method can help empirically determine the controlled states while spanning the spectrum of false-positives and false-negatives. These results are invariant to graph aggregation, which serves as a control for stochastic effects. Our results serve as a first step towards systematically studying human contact-control using empirically testable hypotheses. Future research directions include exploring inter-human variations in contact control, testing for invariance to other metrics besides graph aggregation, analyzing the effect of spatial distance across contact states, and modeling the error distribution while trying to control to a specific contact state. Acknowledgements We thank Keegan Go for his assistance with developing the haptic simulation environment. The project was supported by National Science Foundation National Robotics Initiative grant (IIS-1427396, O. Khatib and R. Bajcsy) and a grant from the SAIL-Toyota Center for AI Research at Stanford (O. Khatib). References 1. Schaal, S., Peters, J., Nakanishi, J., Ijspeert, A.: Learning movement primitives. In: Dario, P., Chatila, R. (eds.) Robotics Research. STAR, vol. 15, pp. 561–572. Springer, Heidelberg (2005). doi:10.​1007/​11008941\_​60 2. Khatib, O.: The potential field approach and operational space formulation in robot control. In: Narendra, K.S. (ed.) Adaptive and Learning Systems, pp. 367–377. Springer, New York (1986)CrossRef 3. Ruspini, D., Khatib, O.: Haptic display for human interaction with virtual dynamic environments. J. Rob. Syst. 18(12), 769–783 (2001)CrossRefMATH 4. Mason, M.T.: Compliance and force control for computer controlled manipulators. IEEE Trans. Syst. Man Cybern. 11(6), 418–432 (1981)CrossRef 5. Whitney, D.E.: Historical perspective and state of the art in robot force control. Int. J. Rob. Res. 6(1), 3–14 (1987)CrossRef 6. Hogan, N.: Stable execution of contact tasks using impedance control. In: Proceedings of the 1987 IEEE International Conference on Robotics and Automation, vol. 4, pp. 1047–1054. IEEE (1987) 7. Featherstone, R., Thiebaut, S.S., Khatib, O.: A general contact model for dynamically-decoupled force/motion control. In: ICRA, vol. 4, pp. 3281–3286 (1999) 8. Park, J., Khatib, O.: A haptic teleoperation approach based on contact force control. Int. J. Rob. Res. 25(5–6), 575–591 (2006)CrossRef 9. Wang, D., Zhang, X., Zhang, Y., Xiao, J.: Configuration-based optimization for six degree-of-freedom haptic rendering for fine manipulation. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 906–912 (2011) 10. Ji, X., Xiao, J.: Planning motions compliant to complex contact states. Int. J. Rob. Res. 20(6), 446–465 (2001)CrossRef 11. Xiao, J., Ji, X.: Automatic generation of high-level contact state space. Int. J. Rob. Res. 20(7), 584–606 (2001)CrossRef 12. Kwak, S.J., Chung, S.Y., Hasegawa, T.: Generating a contact state graph of polyhedral objects for robotic application. In: 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4522–4527 (2010) 13. Meeussen, W., Staffetti, E., Bruyninckx, H., Xiao, J., De Schutter, J.: Integration of planning and execution in force controlled compliant motion. Rob. Auton. Syst. 56(5), 437–450 (2008)CrossRef 14. Meeussen, W., Rutgeerts, J., Gadeyne, K., Bruyninckx, H., De Schutter, J.: Contact-state segmentation using particle filters for programming by human demonstration in compliant-motion tasks. IEEE Trans. Rob. 23(2), 218–231 (2007)CrossRef 15. Skubic, M., Volz, R.A.: Acquiring robust, force-based assembly skills from human demonstration. IEEE Trans. Rob. Autom. 16(6), 772–781 (2000)CrossRef 16. Bruyninckx, H., De Schutter, J.: Specification of force-controlled actions in the “task frame formalism”-a synthesis. IEEE Trans. Rob. Autom. 12(4), 581–589 (1996)CrossRef 17. Onda, H., et al.: Assembly motion teaching system using position/force simulator-generating control program. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 938–945, September 1997 18. Gadeyne, K., Lefebvre, T., Bruyninckx, H.: Bayesian hybrid model-state estimation applied to simultaneous contact formation recognition and geometrical parameter estimation. Int. J. Rob. Res. 24(8), 615–630 (2005)CrossRef 19. Klingbeil, E., Menon, S., Go, K.C., Khatib, O.: Using haptics to probe human contact control strategies for six degree-of-freedom tasks. In: IEEE Haptics Symposium (HAPTICS), pp. 93–95 (2014) 20. Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science, vol. 4. McGraw-Hill, Health Professions Division, New York (2000) 21. Xiao, J.: Automatic determination of topological contacts in the presence of sensing uncertainties. In: Proceedings of the 1993 IEEE International Conference on Robotics and Automation, vol. 1, pp. 65–70, May 1993 22. Tobergte, A., Helmer, P., Hagn, U., Rouiller, P., Thielmann, S., Grange, S., Albu-Schaffer, A., Conti, F., Hirzinger, G.: The sigma.7 haptic interface for mirosurge: a new bi-manual surgical console. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3023–3030 (2011) 23. Smith, R.: Open dynamics engine (2010) 24. Tan, H.Z., Srinivasan, M.A., Eberman, B., Cheng, B.: Human factors for the design of force-reflecting haptic interfaces. Dyn. Syst. Control 55(1), 353–359 (1994) 25. Ruspini, D., Khatib, O.: A framework for multi-contact multi-body dynamic simulation and haptic display. In: Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), vol. 2, pp. 1322–1327. IEEE (2000) 26. McLaughlin, M.L., Hespanha, J.P., Sukhatme, G.S.: Touch in Virtual Environments. Prentice Hall, Upper Saddle River (2002) 27. Salisbury, J.K., Conti, F., Barbagli, F.: Haptic rendering: introductory concepts. IEEE Comput. Graph. Appl. 24(2), 24–32 (2004)CrossRef 28. McNeely, W.A., Puterbaugh, K.D., Troy, J.J.: Six degree-of-freedom haptic rendering using voxel sampling. In: ACM SIGGRAPH 2005 Courses, p. 42. ACM (2005) 29. Kuchenbecker, K., Fiene, J., Niemeyer, G.: Improving contact realism through event-based haptic feedback. IEEE Trans. Vis. Comput. Graph. 12(2), 219–230 (2006)CrossRef 30. Efron, B.: The Jackknife, the Bootstrap and Other Resampling Plans, vol. 38. SIAM, Philadelphia (1982)CrossRefMATH Footnotes 1 In this paper, we ignore contact states consisting of [$$(\{\mathbb {V}\}\_{object} \times \{\mathbb {V}\}\_{environment})$$] and [$$(\{\mathbb {E}\}\_{object} \times \{\mathbb {E}\}\_{environment})$$] where the edges are parallel, since they are degenerate cases.   2 In [10], the contact graph transition function was specifically defined to return a 1 if a given state can be reached by another without losing contact.   3 Human Subjects: Healthy, right-handed subjects with no history of motor disorders: 20m:6’0", 28m:5’9", 31f:5’4", 20m:6’0", 19m:6’0", 20m:5’7", 29m:5’11", 21f:5’2", 32m:5’11", 30m:5’8", 29m:5’8". Informed consent obtained in advance on a protocol approved by Stanford University’s Institutional Review Board (IRB).   Human-Robot Interaction 1 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_26 Hybrid Human Motion Prediction for Action Selection Within Human-Robot Collaboration Ozgur S. Oguz¹  , Volker Gabler¹  , Gerold Huber¹  , Zhehua Zhou¹   and Dirk Wollherr¹   (1) Chair of Automatic Control Engineering, Technical University of Munich, Munich, Germany     Ozgur S. Oguz (Corresponding author) Email: o.oguz@tum.de   Volker Gabler Email: v.gabler@tum.de   Gerold Huber Email: ge.huber@tum.de   Zhehua Zhou Email: zhehua.zhou@tum.de   Dirk Wollherr Email: dw@tum.de Abstract We present a Human-Robot-Collaboration (HRC) framework consisting of a hybrid human motion prediction approach together with a game theoretical action selection. In essence, the robot is required to predict the motions of the human co-worker, and to proactively decide on its actions. For our prediction framework, model-based human motion trajectories are learned by data-driven methods for efficient trajectory rollouts in which obstacles are also considered. We provide the reliability analysis of human trajectory predictions within a human-robot collaboration experimental setup. The HRC scenario is modeled as an iterative game to select the actions for the Human-Robot-Team (HRT) by finding the Nash Equilibrium of the game. Experimental evaluation shows how the proposed prediction approach can be successfully integrated into a game theory based action selection framework. Keywords Human motion predictionHuman-robot collaborationAutonomous action selection 1 Introduction Recently, there have been specific demands from a range of manufacturing industries for the development of cage-free robots that can work in close proximity with humans, as equal team members in a Human-Robot-Team (HRT). Such demands accentuate the importance of highly flexible robots, that can adapt to human actions. Besides the crucial aspect of safety, which we addressed in [1], the focus of this work is set on the improvement of human motion prediction. In order to give an additional insight on the applicability of precise motion predictions to the overall Human-Robot-Collaboration (HRC) behavior, we propose an approach for integrating the prediction method in our adaptive action selection framework. Human upper body motions, specifically the arm motions, are fast and require efficient computation for predicting trajectories. Additionally, as the HRT shares the same workspace, the environment causes further uncertainties in the motions, which has to be considered for trajectory predictions. In that regard, the items unrelated to the current task, as well as, the robot partner should be treated as obstacles for the humans to avoid. Such a motion prediction framework is a key component of safer and natural interaction environment between collaborating partners, as it enables effective action selection, and thus better motion planning for the robot. There are various fields approaching the human motion modeling problem from different viewpoints. Statistical and data-driven approaches focus more on finding representations given a dataset [2, 3], whereas optimization-based methods rely on possible underlying physical quantities together with human kinematics and dynamics constraints [4–6]. Statistical approaches requires training data to discover patterns for different arm motions. In that sense, a rigorous and time consuming data collection process is unavoidable. On the other hand, data-driven approaches, such as Dynamic Movement Primitives (DMPs), require only a minimal training data set [7]. However, integration of the kinematics and the dynamics of the human arm into the problem formulation is not straightforward with DMPs. On the contrary, even though model-based approaches focus more on the underlying mechanisms and the optimality under some kinematics and dynamics objectives for isolated human motion in a specific setting (e.g. planar arm reaching motions), the avoidance behavior has never been the main focus so far. Human-robot collaboration involves interaction between the partners which in turn imposes a requirement for predicting human motions not just in free-space but also during avoidance behavior. Additionally, since the robotic partner is required to anticipate human motions during interaction in a dynamic environment, we need a real-time prediction framework. These are the crucial points where data-driven models excel, but on the downside, just using them alone result in rather poor motion trajectory predictions as we have observed in our preliminary analysis. On the other hand, optimization-based approaches utilize physically-based objective functions related to human kinematic and dynamic models, which enables them to achieve better prediction outcomes for specific settings they are being applied to. However, as they are more computationally intensive, real-time motion prediction requirements cannot be satisfied without sacrificing accuracy. The research presented in this work combines data-driven and model-based approaches to tackle the human motion prediction problem (see Fig. 1). First, an optimization problem is solved with a minimum-jerk model given the human hand position and the possible target position. Then, a DMP system is learned from the model-based trajectory generation. Moreover, since movement primitives are represented as dynamical systems, additional forcing terms can be incorporated in the model. This improves our approach from [1] by enabling the inclusion of dynamic interactions (e.g. obstacle avoidance) into the overall framework for predicting human hand motion trajectories. Once a reliable prediction of the human motion is available, a pro-active action selection and motion adaption of the robot is required for a well-defined collaboration in the HRT. Mainprice et al. [3] use the previously mentioned data-driven approaches to adapt a robot’s motions into regions with decreased risk probability. A similar idea in adapting to the human is shown by Maeda et al. [8] by including the human dynamics in a probabilistic movement primitive framework for the overall HRT while learning collaborative motions. Hawkins et al. [9] on the other hand, focus on temporal prediction of human actions which is then used to choose the most suitable actions for the robot co-worker. Further, Nikolaidis et al. [10] model the sequence of actions for the HRT with a Mixed Observability Markov Model that learns the overall flow of actions and thus adapts to the human’s behavior over time. In contrast to these approaches, we model an HRC scenario from a game theoretic perspective in [11], motivated by the results from previous applications in Human-Robot-Interaction [12–14]. In this work we analyze how human motions can be predicted with respect to a structured environment at an early stage of the motion and how these results can be integrated in the overall collaboration strategy of the HRT. This paper presents the interplay of a novel hybrid prediction framework for human motions, called Hybrid Motion Prediction (HyMP), which is an extension of our previous work [1], and a preliminary game theoretical action selection strategy for the robot to enable collaboration between a human and a robot. The main contributions of this paper are: - We provide a methodology to combine model-based and data-driven approaches for predicting human motion behaviors. - Our framework can take into account the obstacle avoidance behavior of humans. - We provide a parallel implementation of computing human motion predictions for multiple candidate target locations, which is capable of working at interactive rates to allow efficient collaboration schemes within human-robot teams. - An approach of an integrated framework that combines human prediction and action selection is presented. 2 Technical Approach 2.1 Human Motion Prediction In this paper, an approach to efficiently perform human motion predictions during interaction within a Human-Robot-Team (HRT) is presented. This prediction is merged into a game theoretic action selection framework to provide efficient collaboration between partners. Here, we present the steps in our prediction framework to achieve both the integration of obstacle avoidance behavior of humans and also the efficient computation of such trajectories. In our initial studies, just model-based prediction was deployed for predicting human motions in order for the robot to proactively avoid its partner [1]. Specifically, we employed a polynomial fitting that is based on minimum-jerk, which we briefly explain next. Then, we parallelized this method and integrated into the action-selection framework, results of which are presented in this paper. Finally, as we not only want to predict free-space motions but also the avoidance behavior of humans, we developed a hybrid method. As a first step, jerk (third derivative of the human hand position) is minimized to get an initial trajectory towards the target. Second, the generated trajectory is learned by a data-driven system, and then an avoidance behavior is adapted within this data-driven formulation. (a) Minimum-Jerk Based Prediction (MJFit). Flash and Hogan [4] showed that humans follow a smooth trajectory for point-to-point movements by minimizing the jerk. This can be used for trajectory generation. However, trajectory prediction forms the inverse problem i.e. given an observation of a human motion trajectory, compute the coefficients of the model which best matches that segment. The model we used is a fifth order polynomial, that is based on minimum-jerk optimal solution. Hence, we formulate the trajectory prediction as a polynomial fitting problem for which we solve the coefficients. These coefficients are then used to generate the predicted trajectory for the rest of the motion. Finding the coefficients [$${\varvec{q}}$$] of the model can be formulated as an optimization problem [] (1) where [$${\varvec{x}}\_\mathrm {s}$$] and [$${\varvec{x}}\_\mathrm {f}$$] are start and final positions of the movement, respectively, and [$${\varvec{x}}\_i$$] is sampled trajectory at instants [$$t\_i$$]. The full trajectory is [$$\begin{aligned} \hat{{\varvec{x}}}\_i({\varvec{q}},\tau \_\mathrm {mj})={\varvec{x}}\_\text {s}+({\varvec{x}}\_\mathrm {f}-{\varvec{x}}\_\mathrm {s}) \left[ \frac{\tau \_\mathrm {mj}^3}{{(t\_f-t\_s)}^3}, \quad \frac{\tau \_\mathrm {mj}^4}{{(t\_f-t\_s)}^4}, \quad \frac{\tau \_\mathrm {mj}^5}{{(t\_f-t\_s)}^5} \right] {\varvec{q}} \end{aligned}$$] (2) where [$$\tau \_\mathrm {mj}=t\_i-t\_\mathrm {f}$$]. (b) Hybrid Motion Prediction (HyMP). Since obstacles are not considered in our initial MJFit approach, we introduce a two-step hybrid method. First, the trajectory is generated by the classical minimum-jerk model, and then it is learned by a data-driven system. We call the combination of these two steps HyMP from now on. Minimum-jerk. Similar to the MJFit method described above, the minimum-jerk model is used to represent a human-like motion trajectory, but this time from the current position [$${\varvec{x}}\_i$$] to the final position [$${\varvec{x}}\_{\mathrm {f}}$$] without fitting the model to the data observed. Another difference is that we use a bang-bang optimal control policy [15], by minimizing the [$$\mathrm {L}\_\infty $$]-norm of the jerk profile with respect to initial and end conditions on position, velocity and acceleration [] (3) where [$$\mathbf {D}\_\mathrm {mj}$$] is the difference equations matrix. This results in a convex optimization problem which we solve efficiently by CVX, a package for specifying and solving convex programs [16, 17]. Data-driven. Data-based methods for representing human motions are more efficient in terms of computation time of trajectory rollouts once a corresponding representation is learned. In literature, the main usage of data-based approaches has been mostly for learning from demonstration tasks where the intended outcome is to imitate human motions by the robots. However, we would like to exploit the efficiency of these methods in a human motion prediction problem. Having generated the motion trajectory through model-based optimization (Minimum-jerk, see above), in the second step it is learned by the DMP system [$$\begin{aligned} \tau \_\mathrm {dmp}\ddot{y} = \alpha \_z(\beta \_z(g - y) - \dot{y}) + f + p, \end{aligned}$$] (4) where [$$\tau \_\mathrm {dmp}$$] is a time constant, [$$\alpha \_z$$] and [$$\beta \_z$$] are positive constants for the dynamical system, g and y are goal and hand positions, respectively. Obstacles are included as an additional forcing-term p only during online motion prediction [18]. Here, we use the common formulation of DMPs as in [7], where the excitation term f consists of Gaussian basis functions, weights of which are learned by locally weighted regression. [] Fig. 1. System architecture for the proposed motion prediction framework. The focus points of this paper are shaded. 2.2 Human Robot Collaboration Game Based on the definitions of Game Theory [19], our proposed HRC-game is modeled as a finite, non-constant-sum and non-cooperative game with rational players and complete information in the normal form. In particular, it consists of N players [$$P^n$$] with a finite set of actions [$$\mathcal {A}^{n}$$] of size [$$M^{n}$$] each. The goal of the framework is to obtain the Nash-Equilibrium of the game as the action profile [$$\varvec{\pi }^\mathrm {Nash}$$]. The profile assigns a specific action [$$a^{n}\in \mathcal {A}^{n}$$] for each player of the HRT, such that no player can improve the personal pay-off by deviating from the chosen strategy. The pay-off is modeled with a utility function [$${\varvec{u}} = (u^1, \dots , u^N)$$] that maps the impact of an action profile [$$\varvec{\pi }$$] for each player to a numeric value in [$$\mathbb {R} \cup \{-\infty \}$$]. Our initial HRC-model depicts a utility-function [$$u^n$$] as the difference of a planning based reward [$$r^n$$], taking into account the planning context of the collaboration, and an environment sensitive cost-function [$$c^n$$]. As the evaluation of the trajectories connected to each action are emphasized within this paper, we neglect the influence of additional rewards and reflect the utility-functions as a linear combination [$$\begin{aligned} u^n = \underbrace{r^n}\_{\approx 0} -c^n, \quad \mathrm {with}\, c^n = {c}\_{\mathrm {nat}}(a^n) + {c}\_{\mathrm {inter}}\left( a^1,\dots ,a^N\right) , \end{aligned}$$] (5) of native, i.e. player-specific, [$${c}\_{\mathrm {nat}}$$], and interactive costs, [$${c}\_{\mathrm {inter}}$$] that express the impact of actions of multiple players. 3 Experiments In contrast to our experiment evaluated in [11], this experiment includes obstacles within the workspace, as this allows evaluation of the improvements by proposed HyMP approach. In order to emphasize the evaluation of the robot’s and human’s motion, the experiment focuses on a simple pick-and-place scenario in which a human and a KUKA LWR 4+ robot share the same pick-and-place-action set and all actions can be executed independently. For this new set of experiments, a total of 260 trajectories from 12 subjects (3 female, 9 male) are recorded in our study. As depicted in Fig. 2, the HRT is asked to assemble 16 LEGO bricks, from given clustered positions in the edges of a rectangular shaped LEGO plate, into a square structure in the center with predefined goal-positions for each color (yellow, red, blue, green). In between the pick- and place-locations as well as in the middle of the placing area, a total of three obstacles in a trapezoidal shape (pink) have been added to provoke evasive motions by the human participants in particular. The human motion is tracked using eight Qualisys motion capture cameras in order to provide reliable tracking data for the HyMP approach. In addition, the tracking data is fed into an underlying local obstacle avoidance approach from previous work [1] to assure the participant’s safety throughout the entire experiment. As the workspace of the HRT is distinctly dense, a pro-active choice of actions is of utter importance to limit the interference throughout the interaction. As a consequence, the predicted human motions to k goal-points form the human action space [$$\mathcal {A}^{\mathrm {h}} = \{{\varvec{x}}\_1,\dots ,{\varvec{x}}\_k\}$$] for the action-selection framework outlined in Sect. 2.2. [] Fig. 2. HRC experiment setup. Task objectives are 16 LEGO bricks in yellow, red, blue and green (4 each). Start positions are the 4 color clusters in the edges. Target positions are the rectangular shape in the middle. Obstacle objects are depicted in pink. Our evaluation for the motion prediction accuracy and the integration of this human prediction into the game theoretical action selection framework within an HRC setting consists of two stages. In our first study [11], we used MJFit approach online to predict human motion behaviors. We also assessed the accuracy of HyMP approach offline using the data recorded during these initial user studies. Note that, in this initial study, the experimental setup did not include any obstacles within the environment for the human to avoid. Second, we developed an efficient implementation of our HyMP approach that works online and provides the prediction of human motion trajectories to the action selection framework. To test the efficiency and the accuracy of HyMP approach, a second set of experiments were conducted (see Sect. 3). First experimental results have shown that our framework is applicable to an HRC scenario in general, but it is also sensitive to the precision of the underlying human motion prediction as the predicted trajectories are handled as the human action space within our game theoretic framework. As a result, the action-allocation might be retrieved under false assumptions which leads to sub-optimal behavior of the robot respectively. As a consequence, we sought for an improved prediction method for an overall improved collaboration and conducted a second preliminary study on reaching motions in the existence of static obstacles. Each participant was asked to pick up items located on predefined locations around a table. Barriers were positioned in front of some targets to provoke evasive movements (see Fig. 2). 4 Results In both experiments, the online prediction framework generates four possible trajectories corresponding to four possible target locations. There are two cases which we need to consider; (a) after a brick is picked up from its cluster in the edge, (b) after a brick is placed in one of the center locations according to its color. In case a, four predictions are computed based on the possible placing locations. In Fig. 3, those four predictions can be seen as one green and three blue trajectories. Green represents the best fit trajectory compared to the human motion recordings. In case b, four predictions are made from the placing position to the four cluster centers in different corners of the table. [] Fig. 3. Example of Hybrid Motion Prediction during an experiment from the robot’s perspective. Colored dots mark the start and goal positions of the target LEGO bricks. Obstacles are colored in pink. Blue lines depict predictions at movement start to all assumed target locations, the green line is the best fit according to DTW post-analysis, and the red line shows the actual movement as it was recorded. We compare the HyMP method with the MJFit approach by testing the reliability of the trajectory prediction methods using a dynamic-time-warping (DTW) analysis [20]. The HyMP approach outperforms MJFit in terms of mean DTW distance values. Clearly, taking into account the obstacle avoidance effect within the HyMP method provides more accurate motion predictions. As it can be seen in Fig. 4 (yellow), MJFit’s prediction reliability decreases when the subject approaches [$${\varvec{x}}\_\mathrm {f}$$], especially in the first HRC experiment. The reason lies in data over-fitting, particularly if the subject’s motion can not be represented by the model accurately. As a consequence, the predicted trajectory overshoots and results in an increased mean DTW distance value. We also observe a slight increase in distance error for HyMP in the first experiment but it is not statistically significant. In contrast, the HyMP’s prediction accuracy increases towards [$${\varvec{x}}\_\mathrm {f}$$] in the second experiment. The statistical analysis of the 260 recorded trajectories shows that there is a significant difference between the mean DTW-based distance accuracy of the two methods once at least half of the motion is executed (p-value < 0.001 if more than 50% of the trajectory is considered). [] Fig. 4. Results for the accuracy of the two prediction methods MJFit and HyMPfor two experimental setup, with and without obstacles. y-axis shows the percentage of executed trajectory after which we compute future predictions. x-axis shows the mean DTW distance between the predicted and the recorded trajectory averaged over 260 motion trajectories executed by 12 subjects. The MJFit and HyMP approaches we developed, and compared in this paper are computationally efficient enough to enable the integration of prediction of human motion behavior into the action selection framework within the aforementioned HRC settings. Both of them are parallelized by using ROS framework [21]. As there is a prediction for each possible final position, these individual motion prediction computations are run as separate ROS nodes. In that sense, the framework can easily be extended to allow computations of human motion predictions for more possible target locations in parallel. In our analysis, a prediction frequency of 10 Hz for MJFit and 20 Hz for HyMP can be reached on average. 5 Conclusion and Future Work In this paper a novel and computationally efficient human motion prediction framework is presented which can take into account the possible avoidance behaviors of humans in the existence of obstacles. It is also integrated within a novel game theoretical action selection framework for HRC settings. Fast computation of human hand motion trajectories provides the foundation for safer and natural interaction between a human and a robot. We tested our methods with user studies for their accuracy. Next steps include analysis of the effects of accurate human motion prediction on the action selection framework. Moreover, the obstacle avoidance term within the DMP formulation can be extended or replaced with more accurate models. Further user studies can be devised in order to record human avoidance behavior under different scenarios which in turn enables modeling of such behaviors more accurately. References 1. Dinh, K.H., Oguz, O., Huber, G., Gabler, V., Wollherr, D.: An approach to integrate human motion prediction into local obstacle avoidance in close human-robot collaboration. In: International Workshop on Advanced Robotics and its Social Impacts (ARSO). IEEE, pp. 1–6 (2015) 2. Koppula, H.S., Saxena, A.: Anticipating human activities using object affordances for reactive robotic response. Trans. Pattern Anal. Mach. Intell. 38(1), 14–29 (2016)CrossRef 3. Mainprice, J., Berenson, D.: Human-robot collaborative manipulation planning using early prediction of human motion. In: International Workshop on Intelligent Robots and Systems (IROS). IEEE, pp. 299–306 (2013) 4. Flash, T., Hogan, N.: The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci. 5(7), 1688–1703 (1985) 5. Kawato, M.: Internal models for motor control and trajectory planning. Current Opin. Neurobiol. 9(6), 718–727 (1999)CrossRef 6. Harris, C.M., Wolpert, D.M.: Signal-dependent noise determines motor planning. Nature 394, 780–784 (1998)CrossRef 7. Ijspeert, A.J., Nakanishi, J., Hoffmann, H., Pastor, P., Schaal, S.: Dynamical movement primitives: learning attractor models for motor behaviors. Neural Comput. 25(2), 328–373 (2013)MathSciNetCrossRefMATH 8. Maeda, G., Ewerton, M., Lioutikov, R., Amor, H.B., Peters, J., Neumann, G.: Learning interaction for collaborative tasks with probabilistic movement primitives. In: International Conference on Humanoid Robots. IEEE, pp. 527–534 (2014) 9. Hawkins, K.P., Bansal, S., Vo, N.N., Bobick, A.F.: Anticipating human actions for collaboration in the presence of task and sensor uncertainty. In: International Conference on Robotics and Automation (ICRA). IEEE, pp. 2215–2222 (2014) 10. Nikolaidis, S., Lasota, P., Ramakrishnan, R., Shah, J.: Improved human-robot team performance through cross-training, an approach inspired by human team training practices. Int. J. Robot. Res. 34(14), 1711–1730 (2015)CrossRef 11. Gabler, V., Stahl, T., Huber, G., Oguz, O., Wollherr, D.: A game-theoretic approach for adaptive action selection in close distance human-robot-collaboration. In: International Conference on Robotics and Automation (ICRA). IEEE (submitted, 2016) 12. Li, Y., Tee, K.P., Chan, W.L., Yan, R., Chua, Y., Limbu, D.K.: Role adaptation of human and robot in collaborative tasks. In: International Conference on Robotics and Automation (ICRA). IEEE, pp. 5602–5607 (2015) 13. Jarrassé, N., Charalambous, T., Burdet, E.: A framework to describe, analyze and generate interactive motor behaviors. PloS One 7(11), e49945 (2012)CrossRef 14. Turnwald, A., Althoff, D., Wollherr, D., Buss, M.: Understanding human avoidance behavior: interaction-aware decision making based on game theory. Int. J. Soc. Robot. 8(2), 331–351 (2016)CrossRef 15. Yazdani, M., Gamble, G., Henderson, G., Hecht-Nielsen, R.: A simple control policy for achieving minimum jerk trajectories. Neural Netw. 27, 74–80 (2012)CrossRef 16. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming, version 2.1. http://​cvxr.​com/​cvx 17. Grant, M., Boyd, S.: Graph implementations for nonsmooth convex programs. In: Blondel, V., Boyd, S., Kimura, H. (eds.) Recent Advances in Learning and Control. LNCIS, vol. 371, pp. 95–110. Springer, Heidelberg (2008). http://​stanford.​edu/​~boyd/​graph\_​dcp.​html 18. Hoffmann, H., Pastor, P., Park, D.H., Schaal, S.: Biologically-inspired dynamical systems for movement generation: automatic real-time goal adaptation and obstacle avoidance. In: International Conference on Robotics and Automation (ICRA). IEEE, pp. 2587–2592 (2009) 19. Leyton-Brown, K., Shoham, Y.: Essentials of game theory: a concise multidisciplinary introduction. Synth. Lect. Artif. Intell. Mach. Learn. 2(1), 1–88 (2008)CrossRefMATH 20. Kruskall, J., Liberman, M.: The symmetric time warping algorithm: From continuous to discrete. In: Time Warps, String Edits and Macromolecules (1983) 21. Quigley, M., Conley, K., Gerkey, B.P., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software (2009) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_27 Design and Control of Lightweight Supernumerary Robotic Limbs for Sitting/Standing Assistance Laura Treers^(1, 2  ), Roger Lo^(1, 2  ), Michael Cheung^(1, 2  ), Aymeric Guy^(1, 2  ), Jacob Guggenheim^(1, 2  ), Federico Parietti^(1, 2  ) and Harry Asada^(1, 2  ) (1) Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (2) Department of Mechanical Engineering, EPFL, Lausanne, Switzerland     Laura Treers Email: ltreers@mit.edu URL: http://darbelofflab.mit.edu   Roger Lo Email: rdlo@mit.edu URL: http://darbelofflab.mit.edu   Michael Cheung Email: mcheung@mit.edu URL: http://darbelofflab.mit.edu   Aymeric Guy Email: aymeric.guy@epfl.ch URL: http://darbelofflab.mit.edu   Jacob Guggenheim Email: jguggenh@mit.edu URL: http://darbelofflab.mit.edu   Federico Parietti (Corresponding author) Email: parietti@mit.edu URL: http://darbelofflab.mit.edu   Harry Asada Email: asada@mit.edu URL: http://darbelofflab.mit.edu Abstract We present a new, lightweight prototype of the Supernumerary Robotic Limbs (SRL), a wearable robot that augments its user by providing two extra robotic legs. We then showcase the robot’s assistive capabilities by developing and implementing a control strategy that supports the user during sitting and standing motions. The reduced mass and volume of the robot are enabled by innovative design choices including advanced materials, efficient joint structure, and high-performance pneumatic actuation. The assistive control strategy is tailored to each individual based on their motion preferences, and allows the SRL to support users without getting in the way of their movements. The proposed assistive strategy is implemented and validated in experiments with the physical SRL prototype. Keywords Wearable roboticsHuman augmentationRehabilitation roboticsPhysical human-machine interaction 1 Introduction Traditional wearable robots are modeled after the natural anatomy of the human body, either assisting human joints by applying torques in parallel with them (exoskeletons and orthoses) or replacing missing limbs by replicating their function and shape (prostheses) [1–5]. We have introduced and developed a new type of wearable robot - the Supernumerary Robotic Limbs (SRL) - that provide a human user with additional robotic limbs that are kinematically independent from the natural arms and legs [6, 7]. Since the SRL is not constrained to follow the motion of the human limbs (Fig. 1), it can optimize its behavior to assist users without getting in the way of their voluntary motions [8–10]. [] Fig. 1. (Left, Center) Wearing the robot prototype. The SRL is a wearable robot that assists without getting in the way of the user. Kevlar parts are light yellow, while carbon fiber parts are black. (Right) The task that we consider in this paper. The SRL assists a user while sitting and standing. Each robotic limb has two rotational degrees of freedom at its base, and a prismatic one along its length. Having extra robotic limbs would be beneficial in several applications. In a manufacturing scenario [11], the SRL can assist workers by compensating for their weight while standing for extended periods of time, or while working in uncomfortable positions (for example, operating on the floor or ceiling). Another important application of the SRL consists of assisting elderly or rehabilitating people in performing everyday tasks that require significant joint torques, such as sitting down and standing up [12]. In this paper we describe the design and manufacturing of an entirely new prototype for the SRL, capable of supporting the full weight of the user with a total mass of less than 4 kg. This lightweight prototype was achieved through the use of innovative materials, joint designs, and actuation choices. We also develop a data-driven control strategy that enables the SRL to assist a user while sitting down and standing up, producing support forces that decrease the effort required from the natural joints. We then implement this control strategy on the robot prototype, demonstrating its assistive capabilities. 2 Robot Design From the point of view of the design and manufacturing of the robot prototype (Fig. 1), our approach is innovative in three main areas. First, the prototype’s structure is entirely made of parts 3D printed using kevlar and carbon fiber, with nylon as filler between the fibers. The manufacturing process was enabled by a collaboration with MarkForged, Cambridge, MA. This technology combines the advantages of 3D printing (custom, complex parts and rapid iterations) with the mechanical strength and light weight of a composite structure. It enabled a 61% weight reduction with respect to the previous prototype of the SRL, which was realized in alumnimum [9]. Second, the two rotational degrees of freedom at the base of the robot are combined into a single ball and socket joint. The ball and socket (made of carbon fiber) absorb all of the forces coming from the robotic legs, so that the servomotors actuating them do not need additional shafts or bearings. Moreover, axial forces coming from the robotic limbs pass through the center of the ball joint, and do not generate any torque. This means that the linear actuators and the structure of the robot bear the weight of the user, while the servomotors are only used to move the limbs before contact and to compensate for disturbances. Third, the prismatic degrees of freedom are actuated by pneumatic cylinders (model: custom, manufacturer: Numatics, USA). These actuators, controlled by simple on/off valves (model: VUVG-L10-P53C-T-M5-1P3, manufacturer: Festo), produce enough force and move fast enough to provide assistance on a wide range of tasks, from weight support to balance support. Simple control laws allow us to control the position and force of the pneumatic cylinders with sufficient precision for locomotion tasks (see Fig. 5). [] Fig. 2. Model developed of the human-robot system, with length and angle parameters. 3 Control Strategy 3.1 Modeling and Analysis of the Sitting/Standing Motion We recorded the human movements involved in the sitting and standing processes using the Kinect motion sensing system. We then created a rigid-body 2D model (in the sagittal plane) of the human-robot system (Fig. 2), and used it to compute the torques that are generated at each joint during the sitting and standing motions (Fig. 3). In the model, the human ankle, knee, and hip are represented by points A, K, and H respectively. Since the SRL is worn through a belt-like harness, its assistive forces are applied to the human hip (point H). [] Fig. 3. (Left) Human joint angles calculated from 10 trials with the Kinect sensors for sitting followed by standing. (Right) Torques at each joint calculated from the same Kinect data and our kinematic model. The red lines represent the mean, and the shaded yellow areas represent the standard deviation. In order to assist the user while sitting down and standing up, the robotic limbs extend backwards and make contact with the ground (at a distance d from the human ankle). The human and robotic legs on both sides of the body have the same configuration. Therefore, they overlap in the sagittal plane representation of the sitting/standing motion. The robotic limbs can only exert an axial force, [$$F\_R$$], because they have a single point of contact with the ground and a passive end effector. The end effectors of the robot are provided with a compliant wrist joint that allows them to adapt to the orientation of the ground. The base of the end effector is made of soft rubber. This solution provides high static friction and reduces the risk of slipping. In our model (Fig. 2) we assume that the human mass is concentrated at the Center Of Mass (COM) in the lower torso, and that the mass of the robot is negligible (its weight is 3.5 kg, or about 4% of the mass of a typical user). We also assume there is no slip between the robot and its contact points with the ground, and that the motion is completed slowly enough that static laws apply. This is consistent with the slow pace of motion during manufacturing tasks, and during everyday movements for elderly or rehabilitating subjects. Figure 3 shows the human joint torques required to sit and stand without any external support (unassisted case). In the ideal assistive approach, the robot force fully compensates for the human weight in the vertical direction (Fig. 2). This allows us to determine the ideal robot axial force [$$F\_R$$] required in every kinematic configuration. Its equation is [$$F\_R = mg/\cos \alpha $$], where [$$\alpha $$] is the angle between the robotic legs and the direction of gravity (Fig. 2). Considering this assistive force for the robot, we are able to compute the human joint torques required to sit and stand when the user is supported by the SRL (assisted case). Applying the model, we find the following equations for the assisted and unassisted joint torques: [$$\begin{aligned} \begin{matrix} \tau \_{unassisted} = J\_{human}^T \begin{bmatrix} 0 \\ mg \end{bmatrix} \\ \\ \tau \_{assisted} = \tau \_{unassisted} + J\_{robot}^T \begin{bmatrix} mg \tan (\alpha ) \\ -mg \end{bmatrix} \end{matrix} \end{aligned}$$] where [$$J\_{human}$$] represents the Jacobian matrix of the human body from the ankle joint A to the COM, and [$$J\_{robot}$$] represents the Jacobian matrix of the human body from the ankle joint A to the hip joint H, where the robot’s assistive force is applied. 3.2 Assistive Control Strategy Based on motion data gathered with the Kinect sensor, we found the joint and torque trajectories as a function of time for a specific human subject. Figure 3 displays the joints’ position and torque when the subject is sitting down and standing up without external help (unassisted case). In our assistive control strategy, the robot follows the natural trajectory of the human (Fig. 3, left plots) but provides an assistive force - applied to the hip of the user - that compensates for the human’s weight (Fig. 2). We are therefore able to compute the torques that the human joints have to generate when the user is supported by the robot (assisted case). Comparing the human joint torques in the assisted and unassisted cases (Fig. 4, left), it is possible to notice that in some configurations the assisted values exceed the unassisted ones. This happens when the robotic limbs are far from the vertical orientation (large values of [$$\alpha $$], Fig. 2), and have to apply significant axial forces in order to compensate for the full weight of the user. This results in the SRL applying a large horizontal disturbance to the hip of the user, that the natural joints are then forced to absorb. In these cases we modified the control law, reducing the robot force [$$F\_R$$] such that the absolute value of the assisted torques never exceeds the unassisted case. The result, displayed in Fig. 4, is a force law that varies [$$F\_R$$] as a function of the configuration of the human. When the user is sitting ([$$\beta $$] is close to zero), the robot supports his/her full weight by generating a significant axial force [$$F\_R$$]. On the other hand, when the human gets closer to the standing position ([$$\beta $$] approaching 90 [$$^\circ $$]), the axial assistive force [$$F\_R$$] is gradually decreased in order to avoid pushing the user forward in the horizontal direction. This ensures that the human joint torques in the assisted case are always smaller than or equal to those in the unassisted case. [] Fig. 4. (Left) Torques at the knee, comparison of the assisted and unassisted cases. Using the ideal support force [$$F\_R=mg/ \cos \alpha $$], the joint torques in the assisted case can exceed the unassisted ones. (Right) Optimized robot force profile, which compensates for the full weight of the user only when this reduces the human joint torques. When this does not hold, the robot support force [$$F\_R$$] is reduced until the assisted torques are smaller or equal than the unassisted ones. [] Fig. 5. (Left) Using a wearable accelerometer (circled in red) to measure knee angle [$$\beta $$] and generate the target robot assistive force in real time. (Center) Knee angle [$$\beta $$] as a function of time during a stand up / sit down task, and corresponding robotic assistive force [$$F\_R$$]. (Right) Position tracking performance of the pneumatic cylinders. This sample trajectory has been designed to showcase the control performance, and does not represent an actual sitting/standing motion (see Figs. 6 and 7). In order to compute the control strategy shown in Fig. 4, we need to measure the trajectories of all the joints of the human legs during the sitting and standing motions (Fig. 3). This process needs to be executed only once, in order to tailor the assistive force law to the particular motion preferences, kinematic structure, and weight of a user. Once the force law is computed (Fig. 4, right), we can pick one joint angle to track the progress of the user along the sitting/standing trajectory, and to associate the pre-computed assistive force [$$F\_R$$] to that configuration. Since the knee angle shows the largest variation during the measured motions (Fig. 3), we select [$$\beta $$] as the variable that we use to monitor the configuration of the human. The goal for the robot force law outlined above is to be able to map the knee angle [$$\beta $$] to the corresponding robot force [$$F\_R$$] as the user sits down and stands up while wearing the robot. In order to measure the [$$\beta $$] in real-time, we used an accelerometer attached to a soft strap worn on the thigh of the user. These sensors generated accurate real-time measurements of [$$\beta $$], while ensuring user comfort and without getting in the way of the natural movements. Moreover, this solution is completely wearable and does not require any external sensors. The generation of the target assistive force [$$F\_R$$] in real-time is shown in Fig. 5. Notice that since the robot’s assistive force depends on the configuration of the user (angle [$$\beta $$]), the robot naturally follows any user motion without imposing any pre-defined trajectory in time. 4 Experimental Results We implemented the control strategy described above, demonstrating its effectiveness in a series of experiments using the SRL prototype (Fig. 6). A human subject wore the robot through its hip harness, and an accelerometer through a strap around his thigh. The subject was 1.85 m tall, and his weight was 96 kg. The subject started the experiment in the standing position, with his feet parallel and positioned below the shoulders. The robotic legs extended backwards, making contact with the ground at a distance of [$$d=0.2$$] m from the ankles of the subject. The end effectors of the robot were placed on a clean surface to avoid the risk of slipping. The rotational actuators, placed at the base of the robotic arms, were left free to be backdriven by the motions of the user. Since the configuration of the robot was symmetric with respect to the sagittal plane and the forces coming from the robotic limbs were axial, the servomotors did not need to generate compensation torques. The weight of the user was transmitted to the robotic limbs thorugh the carbon fiber ball and socket joints in the base of the SRL. The position of the pneumatic cylinders was controlled using 5/3 pneumatic valves able to extend or retract the robotic limbs, and whose default configuration locked the cylinders in place. The position of the prismatic joints was measured using a magnetic potentiometer. The control algorithm was sent a reference position, and then opened the pneumatic valve in the correct direction (extension or retraction) for an amount of time proportional to the current error in the position. This process ran continuously on the microcontroller located in the base of the robot, so that the pneumatic cylinders quickly converged to the desired position (Fig. 5). In each trial, the thigh accelerometer measured the knee angle [$$\beta $$], which was then used to compute the required assistive force [$$F\_R$$] for the subject. The intrinsic stiffness of the pneumatic cylinders was measured experimentally ([$$k=1258.4$$] Nm), and employed to calculate the desired cylinder displacement [$$\varDelta x = F/k$$]. The displacement of the cylinder is given by [$$\varDelta x = x - x\_0$$], where [$$x\_0$$] is the undisturbed length of the cylinder (which can be computed, since we know in every instant the kinematic configuration of the subject) and x is the required position of the cylinder. Once x is computed, it is sent as a desired position command to the pneumatic cylinders, allowing them to generate the desired force [$$F\_R$$] under the quasi-static conditions of the experiment (the sitting/standing motions are slow, lasting 8 s on average). [] Fig. 6. Experimental validation of the assistive strategy. (A) A subject wears the SRL and an accelerometer strapped to the thigh. Based on the real-time measurement of the knee angle [$$\beta $$], we compute the value of the robot force [$$F\_R$$] and send the corresponding command to the pneumatic cylinders. (B), (C) The values of [$$\beta $$] and [$$F\_R$$], recorded during a representative trial. The robot provides full support when the user sits down (small values of [$$\beta $$]) and decreases the assistive force as the user rises ([$$\beta $$] close to 90 [$$^\circ $$]). The experimental setup and a representative trial are displayed in Fig. 6. In the experiment the subject performed multiple sitting and standing motions, while the robotic limbs provided assistive forces based on the measurement of the knee angle (Fig. 7). The same force command was issued to both robotic legs, and the configuration of the human-robot system was symmetric with respect to the sagittal plane. By applying the assistive strategy described in this study, the SRL was able to support the sitting/standing motions of the user, compensating for part of his weight without getting in the way of his natural movements. This is a unique feature of the SRL, a new kind of wearable robot that augments by providing extra limbs rather than attaching actuators in parallel with the natural joints. Moreover, the robot was able to adapt to the motions of the user without forcing him to follow any pre-programmed trajectory in time. This is evident in Fig. 6, where the third sitting/standing motion is clearly different from the others. The SRL seamlessly adapts to the different motion, generating the assistive force that best supports the user in that particular situation. This robust, seamless coordination with the user is enabled by a control strategy that is based on the kinematic configuration of the subject, rather than on a pre-defined timing or trajectory. This assistive strategy was robust and repeatable over multiple trials, as shown in Fig. 7. [] Fig. 7. Human motion and robotic force recorded during the sitting/standing experiments with SRL assistance. (Top) The trajectory of the user’s knee angle [$$\beta $$]. (Bottom) The value of the robot force [$$F\_R$$]. The red lines represent the mean over 10 trials, while the yellow area represents the standard deviation. The assistive strategy is capable of reliably following the motions of the user, providing the required assistive force. 5 Conclusions The support strategy described in this paper is made possible by the light weight, small volume, and high comfort of our prototype. These features have been achieved with innovative choices in terms of manufacturing technologies and materials (3D printed Kevlar and carbon fiber parts), joint structure (composite ball joints absorbing linear loads instead of standard metal shafts and bearings), and actuation technologies (high-force, high-speed pneumatic cylinders with a position control algorithm). Using the Kinect sensor to analyze the sitting down/standing up motions of different individuals, we found that the optimal robot force profile varies from person to person. Our method involves recording multiple trials with for each subject to identify their individual control law. This force profile is then used to determine in real time the robot’s assistive force [$$F\_R$$] as we measure the user’s knee angle [$$\beta $$] with an accelerometer. The proposed assistive strategy was implemented and validated in experiments with the physical SRL prototype. This assistive approach follows the individual’s preferences in terms of sitting down and standing up motions, and does not impose any pre-planned time trajectory on the human joints. In other words, the robot follows the configuration of its user by measuring the knee angle in real-time. The support force is then computed based on the user’s configuration, so that the assistive needs of the user are always met. This assistive approach enables the SRL to effectively support the user when sitting/standing. It also exemplifies a unique feature of this novel kind of wearable robot: the ability to assist users without getting in the way of their motions. References 1. Dollar, A., Herr, H.: Lower extremity exoskeletons and active orthoses: Challenges and state-of-the-art. IEEE Trans. Rob. 24(1), 144–158 (2008)CrossRef 2. Bogue, R.: Exoskeletons and robotic prosthetics: a review of recent developments. Ind. Rob. Int. J. 36(5), 421–427 (2009)CrossRef 3. Kawamoto, H., Sankai, Y.: Power assist system HAL-3 for gait disorder person. In: Miesenberger, K., Klaus, J., Zagler, W. (eds.) ICCHP 2002. LNCS, vol. 2398, pp. 196–203. Springer, Heidelberg (2002) 4. Ferris, D.P., Czerniecki, J.M., Hannaford, B., U. of Washington: An ankle-foot orthosis powered by artificial pneumatic muscles. J. Appl. Biomech. 21(2), 189 (2005)CrossRef 5. Kuiken, T.A., Li, G., Lock, B.A., Lipschutz, R.D., Miller, L.A., Stubblefield, K.A., Englehart, K.B.: Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms. JAMA 301(6), 619–628 (2009)CrossRef 6. Parietti, F., Asada, H.: Dynamic analysis and state estimation for wearable robotic limbs subject to human-induced disturbances. In: Proceedings of IEEE International Conference on Robotics and Automation (2013) (in press) 7. Parietti, F., Asada, H.: Supernumerary robotic limbs for aircraft fuselage assembly: body stabilization and guidance by bracing. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 1176–1183 (2014) 8. Parietti, F., Chan, K., Asada, H.: Bracing the human body with supernumerary robotic limbs for physical assistance and load reduction. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 141–148, May 2014 9. Parietti, F., Chan, K.C., Hunter, B., Asada, H.: Design and control of supernumerary robotic limbs for balance augmentation. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), May 2015 10. Parietti, F., Asada, H.: Supernumerary robotic limbs for human body support. IEEE Trans. Rob. PP(99), 1–11 (2016) 11. Brown, T.M.: Injuries, illnesses and fatalities in manufacturing, 2005. Bureau of Labor Statistics, July 2007 12. Janssen, W.G., Bussmann, H.B., Stam, H.J.: Determinants of the sit-to-stand movement: a review. Phys. Ther. 82(9), 866–879 (2002) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_28 Integrated Intelligence for Human-Robot Teams Jean Oh¹  , Thomas M. Howard², Matthew R. Walter³, Daniel Barber⁴, Menglong Zhu⁵, Sangdon Park⁵, Arne Suppe¹, Luis Navarro-Serment¹, Felix Duvallet⁷, Abdeslam Boularias⁶, Oscar Romero¹, Jerry Vinokurov¹, Terence Keegan⁹, Robert Dean⁹, Craig Lennon¹⁰, Barry Bodt¹⁰, Marshal Childers¹⁰, Jianbo Shi⁵, Kostas Daniilidis⁵, Nicholas Roy⁸, Christian Lebiere¹, Martial Hebert¹ and Anthony Stentz¹ (1) Carnegie Mellon University, Pittsburgh, Pennsylvania, USA (2) University of Rochester, Rochester, New York, USA (3) Toyota Technological Institute at Chicago, Chicago, Illinois, USA (4) University of Central Florida, Orlando, Florida, USA (5) University of Pennsylvania, Philadelphia, Pennsylvania, USA (6) Rutgers University, New Brunswick, New Jersey, USA (7) Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland (8) Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (9) General Dynamics Robotic Systems, Westminster, Maryland, USA (10) U.S. Army Research Laboratory, Adelphi, Maryland, USA     Jean Oh Email: jeanoh@nrec.ri.cmu.edu Abstract With recent advances in robotics technologies and autonomous systems, the idea of human-robot teams is gaining ever-increasing attention. In this context, our research focuses on developing an intelligent robot that can autonomously perform non-trivial, but specific tasks conveyed through natural language. Toward this goal, a consortium of researchers develop and integrate various types of intelligence into mobile robot platforms, including cognitive abilities to reason about high-level missions, perception to classify regions and detect relevant objects in an environment, and linguistic abilities to associate instructions with the robot’s world model and to communicate with human teammates in a natural way. This paper describes the resulting system with integrated intelligence and reports on the latest assessment. 1 Introduction As robots become commonplace in a variety of domains ranging from manufacturing to the military, there has been growing interest in the development of intelligent robots that can support humans not only as tools, but also as teammates. To be a competent teammate, e.g., to perform a screening mission illustrated in Fig. 1, a robot needs to have basic cognitive abilities including perceiving the semantics of its environment, reasoning about spatial relationships, and communicating with natural language. In this context, while the subfields of robotics and artificial intelligence have been extensively evaluated according to standard metrics accepted within each research community, little work has been done to-date that gauges the current state-of-the-art for an intelligent robot with cognitive abilities. For example, the computer vision community has mainly focused on improving performance on benchmark data sets as opposed to addressing the types of real world challenges faced in robotics [12, 20]. As a result of such disconnections, the majority of existing works in intelligent (or cognitive) robotics includes simplifying assumptions, e.g., ideas are verified in simulated environments or a robot’s perception is assumed to be perfect or is simplified in order to measure the intelligence without including errors due to imperfect perception [6, 9, 10, 17, 21]. In our work, we aim to assess where the technology stands and where technology gaps are in the development of an intelligent robot teammate by integrating various pieces of technologies needed for a robot to perform tactical behaviors autonomously without adding simplifying assumptions. In this paper, we focus on semi-urban outdoor navigation and search behavior. [] Fig. 1. An example showing a Clearpath[$$^\mathrm{TM}$$] Husky unmanned ground vehicle working with a human teammate on a screening mission in an unknown environment. Toward this goal, we develop an intelligence architecture and integrate relevant technologies including state-of-the-art perception modules on a robot platform to assess robot intelligence at the tactical behavior level. Specifically, the capabilities that have been integrated to support intelligence are the following: (1) multi-modal interface to support rich interaction with humans,\* (2) semantic world model, (3) high-level mission planning, (4) object detection,\* (5) door detection,\* (6) human detection and tracking,\* (7) scene classification, (8) building (stuff) detection, (9) object prediction beyond sensor ranges, (10) natural language grounding,\* (11) object symbol grounding, (12) (global and local) path planning, (13) imitation learning for navigation modes, and (14) an interaction layer for mobile robots. We note that the architecture builds on our prior work [19], augmented with new capabilities (marked with \*). We describe our approach and share the lessons we have learned from recent assessment. [] Fig. 2. An architectural diagram of integrated intelligences for human-robot teams. 2 Technical Approach Figure 2 shows an architectural diagram of our intelligence system for a robot teammate. In this section, we briefly illustrate how various modules contribute and interact within this architecture to support high-level robot intelligence. Because our goal is focused on robots that can work with humans, it is important that robots be able to communicate in ways that are natural and efficient to humans. In our system, the interaction between a robot and a human is supported by a Multi-Modal Interface (MMI). Using this interface, a human teammate can issue commands through natural language speech and hand gestures, and review the robot’s reasoning process via annotated camera images and semantic maps. The world model is a central storage of information that is accumulated and merged from various modules. The information stored in a world state includes robot pose data, sensor data, semantic objects, multi-layered cost maps, commands, and various action status. The world model supports a query interface for the modules to look up relevant information. The mission planner takes a command and reasons about pre- and post-conditions of available actions to find a plan that will accomplish the task specified by the command. For instance, given a command “Screen the back of the building,” a set of actions needs to be performed in a sequential order; i.e., the robot needs to navigate to the back of the building, locate a door in the back of the building, and then monitor the area near the door to report upon anyone’s egress from the building. The core of the intelligence system consists of perception, prediction and language understanding. These units contribute to the robot’s understanding of its environment and enable it to interpret and execute a given natural language command. The perception module translates the raw data from the robot’s sensors into semantically meaningful information (e.g., semantic scene classifier, an object detector, a door detector, and a human detector). The prediction module enables the robot to infer a world model for the unseen parts of the environment, effectively compensating for limitations in the robot’s sensing range (as well as possible perception errors) by using prior information about object models or descriptions of objects specified in the natural language command. The language understanding module translates a spoken utterance into a structured representation, known here as Tactical Behavior Specification (TBS) [19], that formally represents the task and its constraints, and computes symbol grounding results [3]. Combined together, these modules enable the robot to robustly perform complex tasks in an unknown environment. 2.1 Human-Robot Interface (HRI) The effectiveness of human-robot teams is intrinsically linked to the efficiency of bi-directional communication. Robots must be able to transform human forms of expression (e.g., language and gesture) into a meaningful representation and communicate their understanding and actions to humans in order to share a cognitive model of mission goals and objectives. To address these challenges, we developed a MMI based on a Toughpad FZ-M1 tablet (Fig. 3). This device enables a human teammate to command the robot through a combination of speech and gestures and receive robot status from the visual display and auditory cues. The MMI represents instructions to the intelligence architecture using the TBS lexicon. [] Fig. 3. An illustration of the Multi-Modal Interface (MMI) for human-robot interaction. The MMI accepts input in the form of speech and/or gesture and visualizes the state of the intelligence architecture. The MMI Visual Display illustrates a “screen the back of the building” command. The robot status shown in the COMMANDS and STATUS sections indicate the command is still running and the robot is currently searching. Grounding natural language to a TBS in the MMI is performed by the Hierarchical Distributed Correspondence Graph (HDCG) [2, 4, 11]. This model searches a pair of graphical models to efficiently translate natural language into a TBS command. The first graphical model is used to infer a set of rules to construct a more efficient representation of a second graphical model that is used to infer a distribution of the physical meaning of each phrase. To characterize the performance of the HDCG in this application, we measured the average run-time of symbol grounding for the natural language expression “screen the back of the building that is behind the car.” Over 100 queries on a MacBook Pro with a 2.6 GHz Intel Core i7 processor, we observed that the model required 0.131 s on average to correctly translate the expression to a valid TBS command. 2.2 Common World Model (CWM) The Common World Model (CWM) [5] defines and instantiates the data model for the intelligence architecture, providing a common, centralized intelligent data storage services. The world model is divided into three main concepts: Metric (sensor data and aggregates), Semantic (class descriptions and instances), and Self Information–data relative the robot, e.g., pose data. At the Semantic level, objects represent symbolic information, enabling abstract reasoning needed for intelligent behavior. Here, CWM maintains semantic information from perception modules, and provides methods for client modules, e.g., the navigate action, to search for semantic objects that are relevant to a specific mission context with a set of filtering criteria. 2.3 Mission Planner The goal of the mission planner is to take commands in the mission vernacular from a teammate (via the MMI) and convert them into a sequence of actions (TBSs). We leverage recent work in ACT-R [1] on models of instruction following in the form of decision graphs, where the decisions themselves are made based on examples of past decisions in the form of Instance-Based Learning [13]. This research uses a single model of decision-making in which more instructions and examples can be included in the system in the form of “chunks”—ACT-R representations of semantic information. The goal of this new model is to provide increased flexibility in adding new examples to the model, which, in turn, allows the model to plan for new missions, as well as in combining generalizations from multiple examples. [] Fig. 4. Examples of object detection. Final detections are shown as red solid rectangles and rejected false positives as blue dashed rectangles. 2.4 Perception We first describe four sensor-based perception modules in our system. Additionally included in this section is perception through prediction. Semantic Classifier. An online scene labeler is used to find buildings, vehicles, traffic barrels, and fire hydrants, and to classify background, e.g., trees, asphalt, concrete, gravel, or grass as shown in Fig. 6. Our approach builds on the Hierarchical Inference Machine [18], a scene labeling method that decomposes an image into a hierarchy of nested superpixel regions. Rather than perform inference on a graphical model, which can be expensive, we instead train a decision forest regressor with 10 trees and the segmentation hierarchy of depth 7 for predicting label distribution. We use SIFT [16], LAB colorspace statistics, and texture information derived from convolving the image with a bank of spatial filters, in addition to statistics on the size and shape of a superpixel region. We process a [$$640 \times 384$$] image in approximately 2 s on a dedicated quad-core i7-3615QM at 2.3 GHz, with feature extraction being the dominant cost. Object Detector. We employ an Active Deformable Part Models (ADPM) method [23] for on-board object detection on our system. ADPM is an accelerated DPM that dynamically schedules parts and prunes locations in a cascade framework. With the current MATLAB/C++ implementation, ADPM simultaneously detects 5 classes on a 10 MP image at 0.5 Hz on a modern CPU. ADPM employs a sliding window approach at multiple image scales to detect objects at different positions and distances. In order to reduce the number of false positives, the detection hypotheses are further pruned using LADAR measurements as shown in Fig. 4. [] Fig. 5. Examples of door detections are shown. Façade detection and door candidates are shown on the left. Final detection output is shown on the right. Door Detection. Detecting doors imposes a unique challenge because doors undergo severe perspective distortion under different viewpoints. Based on the intuition that doors should be seen as a rectangle at a frontal (canonical) viewpoint, each façade candidate is mapped to the image domain according to the known calibration of each sensor. We preprocessed each candidate façade for door detection as follows: façade regions in the image are rectified using the estimated plane orientation in 3D and resized to a fixed scale such that the rectified façades are (virtually) observed at a fixed distance. Due to the canonicalization, the pose and scale variation of doors in the façades can be eliminated. On top of the rectified façades, a Deformable Part Model based door detector [8] is applied. Since the façades are standardized in canonical view and fixed distance, detection can be performed online because searching in a single scale space is sufficient to detect doors as seen in Fig. 5. Human Detection. One of the main objectives of the human-robot team is to identify potential human threats, which would feed directly into the observe action as the architecture is currently laid out. A tree-structured Deformable Part Model [22] was chosen as the state-of-the-art algorithm to perform this task. Given a rectified image, the algorithm reports the locations of 26 individual parts for each detected person. Our contribution is to port the feature pyramid processing code to run on a FPGA or GPU while the rest of the code runs as a module on a separate laptop. Using the current system architecture, streaming [$$1020 \times 768$$] images from the camera, and processing all scales within the human detection algorithm runs at a 0.5 Hz processing rate. Additional LADAR processing is included within the observe action to better discriminate humans from other arbitrary objects. 2.5 Object Prediction In addition to those approaches that use actual sensors to detect objects or humans in the robot’s environment, we also utilize language inputs to perceive objects, primarily in the part of the environment that the robot has not directly explored. The current approach hypothesizes an object when two conditions are met: symbol grounding fails to map a symbol to an object in the world model; and there are areas that satisfy the spatial constraints but have not been explored by the robot. Given a language phrase l that describes a target object with spatial constraints relative to a reference object o, we sample a set of candidate locations from a discretized 2D map defined in [$$X \times Y$$] space. A predicted object is created in an unseen location (x, y) that best satisfies the given spatial constraints: [$$(x,y) = \arg \max \_{(x',y') \in X \times Y} k(x',y') \phi (x',y',l,o),$$] where k(x, y) is a binary indicator with value 0 for free space (i.e., no detection) that has been visited, 1 otherwise; and [$$\phi $$] is a function that represents how well a given location (x, y) satisfies the spatial constraints l relative to a reference object o. 2.6 Structured Command Grounding The symbol grounding algorithm takes as inputs a TBS command and a set of semantic objects in the world model, and grounds each object symbol referenced in the TBS to an object instance in the world model. Spatial constraints specified in the TBS are evaluated in a robot-centric manner, i.e., a spatial relationship relative to the position of the robot at the time when the command was given. We first use a log-linear model to represent the probability that an object in the environment satisfies a given spatial relation. Given an object, this probability is defined as a function [$$\phi $$] of weighted sum of the object’s spatial feature values. The spatial features used here include the distances and the angles between the centers of objects and the robot. A weight vector of each relation is learned by maximizing the log-likelihood of all the training examples using gradient descent with the [$$l\_1$$] regularization. For details, we refer to previous work [3]. 2.7 Actions: Tactical Behaviors An action implements a specific tactical behavior of a robot. Currently supported actions include: navigate (Fig. 6), search, observe (Fig. 7), bump, go-to-xy, and wait; here, we describe navigate as an example. Navigate. Semantic navigation [19] differs from path planning with regards to the expressiveness of its command, as shown in Fig. 6. In contrast to the go-to-xy action, for instance, where a goal is specified in map coordinates, a destination can be described using its spatial relationships with landmarks in an environment. Additionally, a navigation mode can also be specified to instruct a robot to move quickly or more covertly depending on the characteristics of a mission. [] Fig. 6. Navigate: Given a command, “Stay to the left of the building; navigate quickly to the back of a traffic barrel that is behind the building,” a robot navigates to the left of the building toward a hypothesized goal, a traffic barrel in the back of the building. [] Fig. 7. Observe: a static, focused action where the robot registers human detections and reports them to the world model. Once the observe action starts running, it begins listening to the output from the human detector that is already sending human detection messages. 3 Experimental Results To assess the ability of the intelligence architecture to use different capabilities, the system was tested in various mission scenarios. A human teammate used speech and gestures to command each mission through the MMI, and the robots performed the mission autonomously for the entire duration. We evaluated the robot’s performance both via human assessment and via comparisons against human performance on similar tasks. [] Fig. 8. An experimental setup: Two replica of Clearpath[$$^\mathrm{TM}$$] Husky unmanned ground vehicles equipped with the General Dynamics XR 3D LADAR sensor and Adonis camera were used. Table 1. Results on the four vignettes involving navigation (against results from 2013). +------+------+---------+----------+-------------+-------------------+----------------+------------+ | IDs | Runs | Site | Task (%) | Time (min.) | Dist. (m) | Weather | Errors | +:=====+:=====+:========+:=========+:============+:==================+:===============+:===========+ | V1 | 6 | Bar | 87 | 5.8 | [$$36.4\pm 0.5$$] | 3 rain, 3 sun | 2 comm. | +------+------+---------+----------+-------------+-------------------+----------------+------------+ | V2 | 4 | Church | 80 | 5.5 | [$$52.7\pm 2.1$$] | 3 sun, 1 cloud | grounding | +------+------+---------+----------+-------------+-------------------+----------------+------------+ | V3 | 4 | Church | 75 | 3.5 | [$$23.0\pm 0.0$$] | 1 sun, 3 cloud | 2 software | +------+------+---------+----------+-------------+-------------------+----------------+------------+ | V4 | 3 | Bar | 93 | 5.7 | [$$31.3\pm 1.5$$] | cloud | 1 battery | +------+------+---------+----------+-------------+-------------------+----------------+------------+ | 2013 | 20 | Various | 50 |   | | snow, ice | various | +------+------+---------+----------+-------------+-------------------+----------------+------------+ 3.1 Evaluation by Human Experts Performances on screening missions: The complete runs involved two building sites, the Church and the Bar, requiring the robot to navigate 20–60 m to achieve the mission. Total of 17 runs were graded on a 0–100 scale by increments of 20. Table 1 contains the overall human evaluation. When compared with an earlier performance, there has been a significant improvement. In previous results, on a similar set of navigation tasks, the average completion rate was 50 % (where only 30 % received full scores) [14]. Overall, the system consistently executed the screening mission, with 11 of 17 runs scored at [$$100\,\%$$]. Of the remaining runs, 3 failed due to low batteries or software crashes, 2 because of the communication system, and 1 because of a symbol grounding error. Performances on semantic navigation: Table 2 summarizes the experiments from two distinct outdoor environments. The first set of experiments was conducted as part of a larger system assessment in a physically simulated town with 12 buildings in 1 km[$$^{2}$$] outdoor space in a military training facility. A qualitative summary from this set of experiments was reported in [15]. This set of experiments consisted of 57 runs that are 2 replications of 30 commands divided into 12 vignettes–i.e., world configurations. The second set of experiments was carried out in a parking lot of a large, irregular-shaped building. The background in this environment was natural but highly cluttered. In the vignette where the robot was facing the large building, the robot performed poorly because there were many unknown objects on which the recognition algorithm had not been trained. The performance in those vignettes involving known objects was highly reliable, resulting in the average completion of 100 % and 86 % in the complete and the incomplete information cases, respectively. [] Fig. 9. Given a command “Navigate to the back of the building,” this example compares a robot’s navigation path against those of 82 human users. Table 2. Outdoor semantic navigation completion rate (%) with complete vs. incomplete information (The number of runs is in parenthesis). +------------------+----------------------+------------------------+ | Environment | Complete information | Incomplete information | +:=================+:=====================+:=======================+ | Simulated town | [$$94 \pm 13$$] (18) | [$$81 \pm 20$$] (36) | +------------------+----------------------+------------------------+ | Building outdoor | [$$100 \pm 0$$] (7) | [$$86 \pm 26$$] (13) | +------------------+----------------------+------------------------+ 3.2 Evaluation Against Human Performance on Similar Tasks According to our preliminary data collection on 20 subjects, human interpretation of a verbal instruction can vary significantly. Given a simple command, “go to a barrel that is in the back of the building,” 20 % (4 out of 20) of the subjects interpreted the command differently from the commander’s intention, and the paths chosen by the majority who chose similar goal positions as the commander also varied. Motivated by this result, we have collected a larger set of user data on interpreting navigation directions. We created a human intelligence test (HIT) on Amazon Mechanical Turk to collect the navigation paths selected by humans for a similar set of problems to the robot’s. Two out of 84 data entries were eliminated due to incompleteness. To compare the paths generated by a robot against that by a human, we used Frechét distance [7] that measures the distance between two curves. We sort the entries according to their choice of a goal landmark and their mode of navigation, e.g., left or right of a building. We computed Frechét distance between the robot’s path and the paths taken by the group of users who had made the same grounding decisions as the robot. We note that, in all 6 examples, the robot’s grounding choice agreed with that of the human majority. Path comparison: For each human turker, we computed Frechét distance between the path chosen by the human and that of the robot. In addition, we randomly selected another human turker and computed the distance between the paths chosen by the two human participants. The mean and the standard deviation of the Frechét distance for the example shown in Fig. 9 between the paths chosen by a robot and 69 human users who have chosen the same building as their landmarks (drawn in green lines) are [$$56.79\pm 14.37$$], whereas those between human users in that same group were [$$67.70\pm 83.19$$]. The t-test failed to reject the null hypothesis that there is no significant difference when a human generated path is compared against that of a robot or a human; the confidence interval at the 0.05 significance level was [$$[-34.29, 12.48]$$] on the example in Fig. 9. Task-level performance comparison: When evaluated based on the intended goal and landmark groundings, the accuracy of human participants was 68.9%. People performed better on path constraints, reaching 86.9% in accuracy. We also asked the participants to evaluate the paths generated by a robot given the same set of navigation commands. Based on the evaluation of 82 participants, the robot scored 86.0%. [] Fig. 10. Navigation paths with complete vs. incomplete information. 4 Main Experimental Insights Our approach takes advantage of additional information conveyed within verbal commands by a human teammate to improve a robot’s perception. Figure 10 shows progressive changes in the robot’s navigation plans as the robot drives from a partially known world to a known world by gradually acquiring information through perception. The blue dotted line shows the path that the robot would have taken if it had complete information about the environment at the time when the command was given; the red line is the actual path that the robot has taken; the green lines and magenta triangles show the paths and the goals, respectively, that the robot pursued during execution. In these runs, the robot’s early goals may not be precisely correct (because they were the hypothesized goals as opposed to those perceived) but generally guide the robot to a proper direction so that the robot can revise its plan for the actual goal when detected. These examples illustrate that the paths taken by the robot under incomplete information strongly resemble those that would have been taken under complete information. Our experimental results show that, in outdoor navigation, semantic understanding of an environment is still challenging and exploiting information from verbal directions can compensate significantly. In our previous experiments, the performance has been assessed only in terms of task completion as shown in Table 1. Here, we also evaluated the robot performance by surveying human participants on similar navigation tasks. Our experiments suggest that the paths generated by the robot resemble closely those generated by humans and that the robot performs comparably with humans. 5 Conclusion In this paper, we present an intelligence architecture for human-robot teams that has been fully integrated into a mobile robot platform. During extensive assessments on various screening missions, the system performed consistently and robustly, demonstrating the strength of integrated intelligence. We conclude that combining the latest perception technologies and reasoning about complex surroundings with additional capabilities, such as natural language understanding to follow instructions from teammates or predicting unseen environments beyond the ranges of sensors, can lead to a viable robot teammate for implementing high-level intelligence in real environments. Acknowledgments This work was conducted in part through collaborative participation in the Robotics Consortium sponsored by the U.S Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016, and in part by ONR under MURI grant “Reasoning in Reduced Information Spaces” (no. N00014-09-1-1052). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. References 1. Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S.A., Lebiere, C., Qin, Y.: An integrated theory of the mind. Psychol. Rev. 111, 1036–1060 (2004)CrossRef 2. Barber, D., Howard, T.M., Walter, M.R.: A multimodal interface for real-time soldier-robot teaming. In Proceedings of the SPIE 9837, Unmanned Systems Technology XVIII (2016) 3. Boularias, A., Duvallet, F., Oh, F., Stentz, A.: Grounding spatial relations for outdoor robot navigation. In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1976–1982 (2015) 4. Chung, I., Propp, O., Walter, M.R., Howard, T.M.: On the performance of hierarchical distributed correspondence graphs for efficient symbol grounding of robot instructions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5247–5252 (2015) 5. Dean, R.: Common world model for unmanned systems. In: Procedings of the SPIE 8741, Unmanned Systems Technology XV (2013) 6. Duvallet, F., Walter, M.R., Howard, T., Hemachandra, S., Oh, J., Teller, S., Roy, N., Stentz, A.: Inferring maps and behaviors from natural language instructions. In: Hsieh, M.A., Khatib, O., Kumar, V. (eds.) Experimental Robotics. STAR, vol. 109, pp. 373–388. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-23778-7\_​25 CrossRef 7. Eiter, T., Mannila, H.: Computing discrete frechet distance. Technical report, Christian Doppler Laboratory, Vienna University of Technology (1994) 8. Felzenszwalb, P.F., Girshick, R.B., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part based models. IEEE Trans. Pattern Anal. Mach. Intell. 32(9), 1627–1645 (2010)CrossRef 9. Golland, D., Liang, P., Klein, D.: A game-theoretic approach to generating spatial descriptions. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 410–419 (2010) 10. Hawes, N., Klenk, M., Lockwood, K., Horn, G.S., Kelleher, J.D.: Towards a cognitive system that can recognize spatial regions based on context. In: Proceedings of the AAAI Conference on Artificial Intelligence (2012) 11. Hemachandra, S., Duvallet, F., Howard, T.M., Roy, N., Stentz, A., Walter, M.R.: Learning models for following natural language directions in unknown environments. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 5608–5615 (2015) 12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: Proceedings of the Neural Information Processing Systems, pp. 1097–1105 (2012) 13. Lebiere, C., Jentsch, F., Ososky, S.: Cognitive models of decision making processes for human-robot interaction. In: Proceedings of the International Conference on Virtual, Augmented and Mixed Reality, pp. 285–294 (2013) 14. Lennon, C., Bodt, B., Childers, M., Dean, R., Oh, J., DiBerardino, C.: Assessment of navigation using a hybrid cognitive/metric world model. Technical Report ARL-TR-7175, Army Research Labs, January 2015 15. Lennon, C., Bodt, B., Childers, M., Dean, R., Oh, J., DiBerardino, C., Keegan, T.: RCTA Capstone Assessment. In Proceedings of the SPIE 9468, Unmanned Systems Technology XVII (2015) 16. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)CrossRef 17. Matuszek, C., Herbst, E., Zettlemoyer, L., Fox, D.: Learning to parse natural language commands to a robot control system. In: Proceedings of the International Symposium on Experimental Robotics (2012) 18. Munoz, D.: Inference Machines: Parsing Scenes via Iterated Predictions. PhD thesis, The Robotics Institute, Carnegie Mellon University, 2013 19. Oh, J., Suppe, A., Duvallet, F., Boularias, A., Vinokurov, J., Navarro-Serment, L., Romero, O., Dean, R., Lebiere, C., Hebert, M., Stentz, A.: Toward mobile robots reasoning like humans. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1371–1379 (2015) 20. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Proceedings of the Neural Information Processing Systems (2015) 21. Tellex, S., Kollar, T., Dickerson, S., Walter, M.R., Banerjee, A.G., Teller, S.J., Roy, M.: Understanding natural language commands for robotic navigation and mobile manipulation. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1507–1514 (2011) 22. Yang, Y., Ramanan, D.: Articulated pose estimation using flexible mixtures of parts. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1385–1392 (2011) 23. Zhu, M., Atanasov, N., Pappas, G.J., Daniilidis, K.: Active deformable part models inference. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8695, pp. 281–296. Springer, Heidelberg (2014). doi:10.​1007/​978-3-319-10584-0\_​19 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_29 EUROPtus: A Mixed-Initiative Controller for Multi-vehicle Oceanographic Field Experiments Frédéric Py¹  , José Pinto¹  , Mónica A. Silva²  , Tor Arne Johansen³  , João Sousa¹   and Kanna Rajan^(1, 3  ) (1) Underwater Systems and Technology Laboratory, Faculty of Engineering, University of Porto, Porto, Portugal (2) IMAR-Açores & MARE – Marine and Environmental Sciences Center, Horta, Portugal (3) Department of Engineering Cybernetics, Center for Autonomous Marine Operations and Systems (AMOS), Norwegian University of Science and Technology, Trondheim, Norway     Frédéric Py (Corresponding author) Email: fredpy@gmail.com   José Pinto Email: zepinto@fe.up.pt   Mónica A. Silva Email: monica.silva.imar@gmail.com   Tor Arne Johansen Email: tor.arne.johansen@ntnu.no   João Sousa Email: jtasso@fe.up.pt   Kanna Rajan (Corresponding author) Email: kanna.rajan@ntnu.no Abstract Our research concerns the mixed-initiative coordination of air and underwater vehicles interacting over inter-operated radio and underwater communication networks for novel oceanographic field studies. In such an environment, operating multiple vehicles to observe dynamic oceanographic events such as fronts, plumes, blooms and cetaceans has required that we design, implement and operate software, methods and processes which can support ephemeral and unpredictable observations (including those of moving animals) in real-world settings with substantial constraints. We articulate an approach for coordinated measurements using such platforms, which relate directly to task outcomes. We show the use and operational value of a new Artificial Intelligence (AI) based mixed-initiative system, EUROPtus, for handling multiple platforms from a recent field experiment in open waters of the mid-Atlantic. Keywords Marine roboticsOceanographyArtificial IntelligenceMixed-initiative control 1 Introduction Recent advances in robotic vehicles have made ocean observation more sustainable with the use of autonomous and semi-autonomous platforms to observe at varying spatio-temporal scales. The principal challenge however, is to observe the water column not just with point-based observations as in traditional oceanography, but across the mediums of air and water, and the air/water interface, and doing so continuously. Such observations need not only to be synoptic, but also to be coordinated across space and time to requiring coordination and control of a range of robots with appropriate sensors, which necessitates the use of multi-platform systems to observe in the meso-scale (>50 km[$$^2$$]), and to follow phenomena of interest such as blooms, plumes, anoxic zones, fronts over a period of days, weeks or longer. Our experimentation [1–5] has led to the conclusion that multiple vehicles, operating in aerial, surface and underwater domains, are critical for such observational needs especially in the study of our evolving planet. [] Fig. 1. Typical setup of operators on ship/shore for vehicle control in oceanographic field operations. Our work therefore, deals with the operation of networked heterogenous robotic platforms while dealing with unreliable communications in the harsh conditions of the open ocean. Uncertainty in sensing, control and actuation and operational unpredictability is the norm, making operations of such platforms challenging in this environment. Further, the target applications to date, often involve dynamic and unpredictable phenomenon in unstructured environments with the use of autonomous underwater vehicles (AUVs) as well as unmanned aerial vehicles (UAVs). UAVs offer speed and the agility for synoptic observations; AUVs can then ground-truth and collect in-situ data with an array of sensors in the water column. Put together, autonomous robotic operations are key for sustained oceanographic observations in the light of such operational constraints. This work concerns the mixed-initiative coordination of air and underwater vehicles interacting over inter-operated radio and underwater communication networks for oceanographic field studies. In such an environment, operating multiple vehicles to observe dynamic features, including motion of cetaceans or oceanographic phenomena such as fronts, plumes and blooms has required that we design, implement and operate software, methods and processes which can also support opportunistic goals amid tight operational constraints in real-world settings. Robot tasks deal with coordinating and completing observation and sampling tasks in the air, surface or underwater domains for observing the same patch of the ocean co-temporally. We articulate an approach for coordinated measurements using such platforms, which relate directly to task outcomes. We show the use and operational value of a new Artificial Intelligence (AI) based mixed-initiative system, EUROPtus, for handling multiple platforms (Fig. 2). Coordination in this context implies the ability to envision task completion in the light of dynamic surroundings while dealing with operational constraints. While coordination in marine robotics has typically been viewed as a means to demonstrate nominal engineering principles [6, 7], our work is focused towards solving specific oceanographic needs. Our robotic platforms are tied together with a mature set of software tools for decision support, planning, control, data visualization and archiving provided by a toolchain [8]. [] Fig. 2. AUV (left) and UAV (right) launch operations from the NRP Gago Coutinho in the Açores, July 2015. In operational oceanography, traditionally a graphical interface is used as a planning tool to offer decision-support capability with humans making all decisions a priori for a very uncertain, harsh and dynamic setting (Fig. 1). Our objective is to augment such methods with automated planning [9] to allow a single operator to command multiple heterogenous vehicles in real-world settings. EUROPtus is a mixed-initiative constraint-based automated planner which aids such field campaigns in coordination to ensure that operational goals are satisfied during mission operations. The planner does not model all constraints comprehensively, but is used primarily for task assignment, information gathering and situational awareness of AUVs, all the while keeping a simple resource model of UAV operations to model when tasks need to be outsourced and operators need to be engaged. The application domain involved using UAVs to spot cetaceans providing a GPS fix on their coordinates as a means for targeted in-situ measurements with AUVs. While EUROPtus was designed and field tested in the specific operating environment for our experiment at sea in the waters off of the Açores¹, we believe the principles behind it are general. This paper is organized in the following manner. We briefly place the context of this work in Sect. 2. We start by describing the scientific domain in cetacean tracking in Sect. 3 and then move to the core of the paper in Sect. 4 describing technical details of EUROPtus’s representation and reasoning mechanism. We describe the tools, techniques and process we use in Sect. 5 and its use in the experiment in Sect. 6. We conclude with lessons learned and some ideas for future work in Sect. 7. 2 Related Work AI based mixed-initiative methods for planning, continue to be novel in the oceanographic domain. In recent work [10] has used an automated planner for controlling multiple AUV; EUROPtus moves beyond this work in multiple ways. First, the underlying plan representation allows for replanning and reasoning about time and resources. By using the identical plan representation and reasoning formulations on ship/shore with an embedded controller, we can seamlessly transfer partial plans for commanding a vehicle. Second, EUROPtus deals with heterogenous vehicles and their operational complexities. Finally, we use active simulation coupled with the expected execution trace while keeping the operator situationally aware. In EUROPtus we are informed by efforts on the command/control of the Spirit and Opportunity rovers on Mars [11, 12] using a similar mixed-initiative approach. The operating domain at sea however, is harsher and substantially more constrained with a more dynamic pace and operational fluidity. Using automated planning as a means to provide abstraction in control over single or multiple vehicles in the oceanographic domain, continues to be novel. Robotic vehicles, UAVs in particular, have been used for studies in animal ecology in the open ocean [13]. However, our work here, more closely supports upper water-column biology as a side effect of cetacean transit. 3 The Scientific Domain The overall scientific objectives of tracking cetaceans in the open ocean are driven primarily by the need to understand the drivers behind movement behaviour of large marine predators; in particular to collect data to interpret how ecological conditions, including inter-and intraspecific, environmental, trophic and social interactions shape their decisions. The open-ocean and specific habitat features, such as seamounts, fronts, eddies, clines, and other provide the oceanographic phenomenological context within which to study such animal behavior. The Açores, in particular, presents the ideal set of conditions to pursue this goal, bringing together multi-taxa opportunities, easy access to those habitats, access to facilities and vessels as well as a team working on top predator biotelemetry with extensive field and data analysis know-how. The long-term concept to obtain such observations, amounts to near real-time 4D (space X time) tracking of large marine predators and synoptic, dynamic prey and oceanographic sampling. The concept reverses the current paradigm on oceanic observatories: instead of having a static array of equipment fixed at a locality that is only effective when the animals wander inside its influence area, the concept would be to have a dynamic laboratory that can be deployed around and follow focal individuals/groups to collect the relevant data in the surrounding environmental ‘bubble’. Specific scientific objectives in this domain include, understanding the physical and biological mechanisms promoting prey abundance and aggregation, understanding the relationship between different prey field properties (e.g. overall distribution and density, patch patterns) and the cetaceans’ foraging decisions and determining whether environmental properties measured in upper water column can be useful at predicting prey properties and foraging behavior at depth. In addition to environmental factors, foraging decisions of animals are likely to be affected by multiple ecological and intrinsic factors. Typical cetacean tracking scenarios can be combined with the collection of other data that may provide new insights into the role of some of these factors. For example, collection of biopsy, fecal, and blow samples of tagged whales to do molecular sexing [14] and to assess their reproductive and nutritional state. Still images taken by an AUV to measure body length and width of surfacing whales, could serve as a proxy for age and nutritional status. Intra and interspecific competition may also constrain foraging strategies of animals, for instance, by limiting subdominant species/individuals to habitats with lower prey availability or by changing preys behavior, distribution or aggregation patterns. A first step towards understanding the influence of density-dependent factors would be conducting surveys simultaneously with the whale tagging and environmental sampling. Doing this with traditional methods would be prohibitively expensive but AUVs and UAVs offers a unique opportunity with minimal costs and personnel, to obtain data on the total number of whales foraging on the study area, allowing us to examine if and how whale abundance influences prey patterns and whale behaviour. Together, these provide the relevant rationale and entry point for robotic vehicles, including AUVs, UAVs as well as ASVs (autonomous surface vehicles) as a viable set of observational tools for such studies. However, the science needs and intent outstrip the available technology at this stage, especially in targeting fast moving cetaceans and at depth. This experiment was therefore conceived as a starting point for long-term research and inter-disciplinary collaboration with a unique set of constraints, driven by the key scientific objective to characterize the upper water-column environment, primarily to ask the question “Why are cetaceans foraging at this location?”. This work also builds on our previous efforts in tracking much smaller fish in space and time [3, 15]. 4 Technical Approach Mixed-initiative methods are used to provide a human operator situational awareness and commanding capability for robotic vehicles where the mapping is one-to-one between operator and robot. Our aim is to simplify further by offloading the operator’s cognitive burden especially in coordinating multiple heterogenous vehicles. We do so leveraging constrained-based temporal planning [9]. While the focus is on human-in-the-loop interaction, our system interacts with the T-REX plan-execution system embedded onboard our AUVs [16–18]. Both EUROPtus and T-REX rely on the same rich representational formalism for plan synthesis; we outline this briefly. Additional details can be found in [16–19]. 4.1 Representational and Planning Framework Traditionally robotic execution has relied on dispatching commands at precise times with an executive. Such linear sequences of precisely timed commands give no ability to adjust execution on the basis of sensory information. The consequences of the intrinsic inflexibility of such sequences are critical; because they are inflexible, sequences are brittle and therefore must necessarily be designed considering worst case scenarios. EUROPtus’s representation significantly broadens the way robots can be commanded [20] with the interpretation of a temporally flexible plan which represents each start time as a flexible timepoint variable backed by an explicit network of bounded delay constraints between such temporal variables. These timepoints [21] represent temporal intervals signifying a change in state and are of the form [lb, ub] (where [$$lb, ub \in N$$]) are temporal lower and upper bounds respectively. Instead of specifying a fixed integer, time is represented as an interval (see Fig. 3) leaving room for adaptation at execution time while representing uncertainty (in outcomes or the environment). When the executive considers when to start a task, it propagates information through the constraint network, computes a time bound for the variable, selects an actual execution time within the bound, and starts the task at that time. Temporally flexible plans therefore, express a range of possible outcomes of the robots interaction with the environment within which the executive can elect at run time. The fact that constraints are explicitly represented ensures that through propagation the executive will respect global limits expressed in the plan (e.g., don’t start a task until a certain condition has been satisfied) while still satisfying some overall deadline. [] Fig. 3. Tokens with flexible temporal intervals and parametric constraints between tokens. This example shows the triggering of an AUVs water sampler based on a feature threshold while the vehicle Yo-Yo’s. The Waypoint\_Yo-Yo token has a flexible duration, start & end times. A plan is composed of temporally scoped predicates called tokens. A token can be described as a first order logic predicate, with its associated temporal scope (start, duration, end) using flexible interval arithmetic [22]. All the attributes of a token are described as a domain of possible values for this token in the plan context. In order to be part of the plan, a token needs to be associated with one timeline only. A timeline is a sequence of tokens describing the evolution of a state variable while enforcing a strict sequence of tokens with no concurrency within the timeline. Concurrency between timelines and therefore between tokens on separate timelines, is the basis for concurrent state variable evolution. Tokens are causally linked by rules in a domain model that describe temporal relations and/or causality links between tokens [16, 23]. Finally a token can be marked optionally as either being a Fact or Goal. While a Fact requires no justification, a Goal will need not only to be inserted in the plan but necessarily have a causal chain connecting it to one or more Facts. The underlying planner for EUROPtus, is EUROPA [19, 23]; while the key concepts are not tied to this specific planner, our implementation relies heavily on its flexible temporal representation as also the basic principle of searching in plan-space [9]. The planner works by continuously repairing flaws in a plan until no more flaws are present; for execution, all partial plans must not have any flaws. Typically we deal with two types of flaws: an open condition where a token is not associated with a timeline and can be resolved by either inserting it into a specific timeline, or merging the token with one that is compatible. Or a threat where an inserted token may impact others indirectly through possible overlapping requirements. The plan solver then needs to enforce a scheduling constraint on those potentially conflicting tokens so they cannot overlap. The solver resolves these flaws until either it reaches an inconsistency (i.e. a situation where plan constraints cannot be satisfied), in which case the planner will backtrack to explore alternate solutions; or a consistent solution is found and the plan presents no further flaws generating a valid solution. Additionally, while the plan might be complete, it does not have to commit to the value of its variables. For example, the start time of a token can be left to be the interval [1, 10] as long as it does not present a threat to the partial-plan. This leaves the decision of the start time to the executive which is critical for operating in uncertain real-world environments. Details of the solver and the planning engine are outside the scope of this paper and can be found in [18]. 4.2 Planning and Execution with Asynchronocity Typically in fully autonomous systems such as T-REX, planning and execution are intertwined. Both manipulate the same plan representation and plan execution is required to occur at every clock cycle. This influences in turn, the outcome of the planning process while ensuring that any plan produced by the agent is taking into account state evolution with the assumption that the agent is within a synchronous and fully observable world. Such a design, while appropriate for embedded systems, is incompatible with communication challenged field operations where observations are often asynchronous in nature, especially where coordination with humans on launch and recovery operations can be complex. Further, observations from the robots can arrive sporadically, as with our AUVs, and observability is limited by the availability of an acoustic channel to the vehicle. Execution can still be integrated into a deliberation process such as planning; but we go a step further in considering that execution is a part of planning. Execution feedback is integrated into deliberation by taking advantage of the way the planner works; the planner will stop searching as soon as no more flaws are found and the introduction of a new token (including Facts) in the plan create new flaws in the plan. Example: Consider that the planner has no more flaws and that the next command to be executed has been already dispatched (see below). Because of communication latencies, which may result from intermittent connectivity, we receive feedback from AUV[$$\_a$$] that indicates that its state changed from Inactive to Operating an hour ago. Our approach is then to create the Fact token Operating for AUV[$$\_a$$] starting at the time corresponding to an hour ago. This token is added to the plan generating a new Open condition flaw that the solver needs to resolve. The resolution can be either as simple as a token merge, if the token reflects exactly what was planned, or it is conflicting with the partial plan, requiring in turn, the planner to backtrack. The insertion of such a token as a Fact however, is akin to plan recognition, i.e. an agreement with the currently maintained partial plan structure. As the planner operates continuously keeping its search state alive, this recognition can impact the search by forcing the planner to backtrack over past decisions until it finds an alternative new solution given the injected Fact. As long as we assume that the decisions impacted by the new observation are not close to the root of the search tree it allows for a plan resolution in few steps without an adverse impact to performance. Further, such an assumption is reasonable as often past observations arrive in relative chronological order rarely impacting the distant past in plan history. These steps also highlight how execution tracking can be a pure deliberative task within the same planning engine. A remaining problem is to decide which part of the plan is ready to be executed and sent to a vehicle via NEPTUS, a visualization, planning and situational awareness tool. Many actions in a partial plan can be conditioned by the need to observe a situation. Actions are specific tokens introduced in EUROPA, that instantiate a causal relationship between their condition and effect tokens. It reflects the classical approach to describe a plan domain [9] but is substantially more expressive and allows its use for additional semantics. An action is a special kind of token with temporal relations expressed either as conditions necessary for its execution, or expected effects of this actions [18]. To command AUVs for a survey we needed EUROPtus to observe that: an AUV is ready and in the Inactive state and that we have a cetacean position update that is at most 30 min old; this heuristic was imposed to ensure “freshness” of cetacean tracks. When deciding on dispatching a partial plan, EUROPtus does an analysis of the causality structure of an action with its tokens. It does so by introducing the notion of a Justified token as: a token is Justified if either it is a Fact, the condition of a Justified action or it is an action for which all the effects are Justified. In a complete plan, an action in the plan can be dispatched for execution when all its conditions are Justified and its start time interval contains the current time. While action justification is reasonable, we need to ensure that we do not dispatch the action before its valid start time. However, we could potentially be in a situation where an action does not have all its justified conditions due to one or more missing messages and yet its start time is before the current time (i.e. it should have been dispatched for execution in the past). Dispatch is then postponed and we address this as a Pending action flaw, which applies to any action of the plan that could start at the current time but does not have all of its conditions Justified nor is it Justified itself. Its default resolution is to restrict the start time to be postponed. Consequently the planner needs to postpone dispatching this action until either new observations justify the action or the start time can no longer be pushed; the latter triggers a backtrack for an alternate solution. Any pending action that has all of its conditions Justified is dispatched to NEPTUS for execution which eventually will receive the observation of its completion (from the vehicle) and report it to EUROPtus. This results in a control loop that is managed as a pure continuous deliberation process governed by the principles just described. The position update then comes into play for the AUV operations model. As the AUVs are driven by an embedded T-REX agent, EUROPtus could be further extended to directly leverage such a positional update for “direct” control of the vehicles. Yet the limitation in terms of communication had to be taken into account. Our AUVs can communicate with the ship only if they are at the surface and either in Wifi range (which might also be not desirable as the ship can present a threat if operated too close to the vehicle) or long enough to the surface so it can initiate a satellite connection. When the vehicle is underwater, it was assumed that there is no means to communicate². 4.3 Mixed-Initiative Interaction EUROPtus deals with operational constraints for field experiments. It leverages and builds on existing work in fully-autonomous AUV operations. It has a simple resource model which is integrated into the planner to enable as complex a coordination model as necessary. By using EUROPtus, the operator offloads a portion of the planning task. This is important when considering networks with very high variability of vehicle configurations and capabilities. However, the operator is still in charge of providing high-level goals, and supervising the plans sent to the vehicles. When EUROPtus determines so, it can request new objectives for vehicles deployed in the field or prompt humans behind an operations console. For instance, operators are expected to execute some task and provide input (e.g. inspect collected data and determine a list of waypoints to be visited) or, EUROPtus generates a high-level objective that is sent to an autonomous vehicle to replan in-situ accordingly. If a new plan is found, this is reported to the consoles from which operators can both provide new objectives and inputs, and/or recall existing objectives. This is the focus of such mixed-initiative interaction between vehicle(s) and operator. 5 Experiment Infrastructure Our robotic hardware consisted of multiple Light AUVs [24] and customized Skywalker X-8 UAVs. The architecture for command and control is built on top of a mature toolchain [8], which has a back-seat driver API to the DUNE software which is in charge of navigation, logging and management of all communications onboard the vehicle. DUNE allows external controllers to provide desired poses for the platform while receiving progress updates on their attainability. This allows for the development of external controllers which are not tied to specific vehicle hardware allowing DUNE to use provided slack to improve vehicle safety, navigation or battery optimization. Using this API, embedded on the AUVs is T-REX [16, 18]; when it receives a goal, its objective is to synthesize partial plans and execute these while simultaneously monitoring execution onboard the vehicle. We use a publish-subscribe message-centric system, IMC [25], that is used for state updates all throughout the system, and to send commands and high-level objectives to the platforms. NEPTUS provides visualization, situational awareness and commanding with human operators. NEPTUS consoles consist of an empty canvas populated with visual widgets and map layers that reflect the data received and allow interaction with the network over IMC. These together are the components of the toolchain; EUROPtus augments them with a shore/ship side component. For this experiment we consider two types of assets; a UAV that is operated with human-in-the-loop waypoint-based control and AUVs running T-REX which can receive survey objectives via a timeline goal from EUROPtus. The UAV launch and recovery operations involves substantial human involvement and its operation currently requires close monitoring. Consequently, the basis of interaction with EUROPtus is simplified as follows: - EUROPtus can request a new deployment through NEPTUS directed to the UAV console operator - the maximum UAV operation time is assumed to be 30 min given typical battery life - replacing a UAV battery on shore/ship including recovery and new launch will take approximately 15 min between 2 surveys³ - a cetacean position update is expected to be the observed outcome of a successful UAV survey and when available generates a goal within EUROPtus which is instantiated on a timeline EUROPtus approximately models UAV operational constraints and its interaction is indirect as it sends a message to the UAV operators console to trigger the operators response. Its observations similarly are driven by a human operator’s event when s/he manually identifies and “marks” a cetacean position on NEPTUS. The EUROPtus AUV model is based on: - A timeline representing the vehicle position updated whenever a position update is received and placed in the timeline according to its observation timestamp - A set of possible surveys both parameterized with their scale in meters (representing the outer box surrounding the survey), its centroid (represented by a latitude and longitude) and its orientation (a rotation angle) [26]. From this information and the AUVs known nominal speed and location, EUROPtus is able to identify both the entry and exit location along with the completion time of this survey - The high level operational state of the vehicle being either Inactive, Operating and Survey the latter which takes as an argument a fully instantiated survey as above Typically within EUROPtus the AUV’s overall state cycles between Inactive, Operating and Survey in executing a survey. The Operating duration depends on the scale of the survey and the distance from the survey start point (which should be the last position observed when the vehicle is Inactive and on the surface. Survey being a goal state, its duration is short since it is used as a feedback confirming the successful completion of the survey request. This timeline – along with the AUV position – effectively produced by T-REX onboard, is the means of interaction between the AUV and EUROPtus on ship. Time-stamped messages from the AUVs, as noted, come with significant time delays. This means that the executive associated with EUROPtus should not only be able to dispatch the actions in a timely manner, but also to integrate observations from the past and, when required by the model, delay the execution of a specific action until its conditions are effectively observed. Our experiment required having multiple vehicles disconnected for substantial (15–30 min) periods of time. In order to improve the operators’ situational awareness of vehicle positions and progress, we used a number of simulated vehicles running off commands sent to the actual platforms. As a consequence, the simulated vehicles execute the identical set of commands albeit in an idealized environment. While doing so and in periods of loss of contact, operators can determine with clarity, what each asset is expected to be doing. In the case of AUVs, any updates received over any available communication channel, are used to reset the simulated vehicle’s localization filter. Operators interact through different NEPTUS consoles which are adapted to mission and operator-specific needs. EUROPtus was implemented to orchestrate the operational setting involving multiple assets. For this purpose, a special NEPTUS plug-in acts as a routing device for incoming and outgoing messages. The plug-in redirects data from EUROPtus to the controlled/simulated AUVs and, at the same time reports all received updates to EUROPtus; conversely this plug-in also notifies UAV operators about requests. [] Fig. 4. The operational architecture of EUROPtus used in the experiment in comparison to more typical approaches shown in Fig. 1. 6 Experiment Setup Our experiment was in open waters south of Pico Island in the mid-Atlantic Açorian archipelago (Fig. 5). All operations were carried out on board the Portuguese Navy research vessel, the NRP Almirante Gago Coutinho with AUV and UAV launch and recoveries from the aft deck; all operators and pilots were onboard the vessel. Multiple (sectorial and omnidirectional) antennas for providing 802.11 WiFi and Airmax coverage via ubiquity radios on UAVs were provisioned with multiple communications gateways via a local area network onboard the vessel, allowing operators to be simultaneously aware of AUV and UAV operations (Fig. 4). The AUVs were equipped with RBR XR620 CTD (Conductivity, temperature and density) probe, WHOI acoustic modems, an Iridium SBD modem for satellite communication, an experimental Holographic imager and a Turner Designs Cyclops-7 wet-probe with a fluorometer. Skywalker X-8 UAVs at our disposal were equipped with Far-IR cameras, with one vehicle equipped with a new light-weight hyper-spectral imager. [] Fig. 5. The open water location of the experiment 15 Nautical miles south of the island of Pico in the Açores. Our scenario called for either an experienced human observer on a separate boat, or airborne UAVs with real-time imagers, to detect foraging cetaceans at the surface. As animals were moving, a reference point was determined either from imagery or from the visual observer, which was communicated to the operators on the vessel. If AUVs were not already in the water, they were launched. Repetitive AUV surveys around this targeted spot were to commence, using two AUVs to measure spatial variations in the water column. Both T-REX-enabled AUVs were tasked by EUROPtus to synthesize yo-yo based survey patterns [26] in two concentric square patterns of [$$400\,\times \,400$$] m[$$^2$$] and [$$800\,\times \,800$$] m[$$^2$$] diving to a depth of 50 m. The vehicles also surfaced on the corners of the squares in order to localize with a GPS fix. The larger pattern was used to sample outside the foraging area, as a measure of understanding the variability in the upper water-column. The smaller survey was expected to take 25 min, while the larger 50 min at about 2.5 knots speed over ground for the vehicles. Consequently two inner surveys provided a dense coverage co-temporal to the single outer survey, (Fig. 6(a)). At the end of the survey, the next cetacean target was to be determined either via the visual observer or a UAV and the vehicles re-tasked by EUROPtus. One extended objective was to attempt not only co-temporal AUV surveys, but to coordinate the survey of the sampling area with UAV overflights with Near-IR and hyper-spectral sensors. In doing so, it was thought, we could obtain additional science data of the ocean surface to be merged and subsequently studied to understand the bio-geochemical composition in the light of any effluents from the passing cetaceans. This was successfully achieved by sharing of mission plans between NEPTUS consoles and by coordinating the execution of plans via EUROPtus. Figure 6(b), shows vehicles in the same operating area, one airborne, one at the surface and one AUV underwater (connected acoustically). At the same time, multiple NEPTUS consoles were receiving and controlling the different vehicles from the ship. In EUROPtus we tie the AUVs operational model with that of the UAV to ensure coordination; UAVs if not in the air were launched from the aft deck, to be surveying the same area overhead. [] Fig. 6. Coordinated observations of a UAV with two AUVs in the water-column on July [$$19^\mathrm{th}$$] 2015. Figure 7 shows EUROPtus’s deliberation steps and depth along the first hour of the July 19th mission. This figure shows the number of steps increasing monotonically reflecting the integration of new observations as they were received from NEPTUS. The depth grows slower showing that backtracking occurs during execution. Except for the small downward spike, the depth tends to either remain flat or climb, indicating that in general backtracking was only impacted by recent decision and hence the planner recovered gracefully. The downward spike was due to erroneous timestamps on some AUV location messages before they reached EUROPtus. Despite the large jump in steps ([$$\approx $$]700) the planner recovered in less than 3 s. 7 Lessons Learned and Conclusions In this paper we show how EUROPtus, a mixed-initiative planner aids coordinated operation of multiple heterogenous assets with unpredictability of the environment This work is informed from years of iterating toward a balanced approach between autonomous decision-making and adequate operator supervision. The multi-vehicle operation was carried out by a mix of operators and automated planners onboard the vehicles and at the control station. EUROPtus continues to be an experimental system in our attempt to understand the boundaries of autonomy and autonomous operations in the context of oceanographic field deployments. It shows promise in relieving the operator of the intricacies of synchronization between asset operation and focuses on higher-level coordination including scientific goal-achievement and operational safety. A more traditional waypoint-based planning tool, NEPTUS, was not only extended to dispatch commands from EUROPtus but to provide enhanced situational awareness with an overview of their expected behaviour to users. [] Fig. 7. Number of deliberation steps in EUROPtus and its search tree depth for a subset of July [$$19^\mathrm{th}$$] operations. EUROPtus was running on a Linux Virtual Machine on a 2.8GHz Intel Core i7. The addition of EUROPtus relieves operators of the intricacies of synchronization between assets and focuses more on abstraction such as the pursuance of scientific goals and the safety of the assets. To address the latter, NEPTUS was extended to not only dispatch commands from EUROPtus to the assets but also to intercept them in order to give an overview of expected behavior to the users. Its ability to handle observations with temporal delays and ability to dispatch commands only when all their conditions are observed proved to be effective. Operators consequently, could focus on high level operational concerns and rely on NEPTUS to maintain situational awareness for understanding the current status of the mission. Among its shortcomings however, is the way Pending action flaws are resolved. The only way the system can rectify them is by postponing the action until it has all of its conditions Justified. An interesting extension would be for the planner to actively enrich the plan by proactively asking for operator input. This’inquisitive’ approach would then prevent the system from replanning just because there was no feedback before a certain deadline. Another aspect we would like to explore is allowing EUROPtus to forget part of its search when it no longer impacts the plan. A novelty of EUROPtus, when compared to classical approaches, is that it keeps all of its search history. One benefit is that EUROPtus can revisit the past and restore a partial plan fragment in the light of a delayed observation or justification. A side effect is that the receding planning horizon can result in computational search being slower as time advances. A solution to this potential issue would be to prune nodes from the search tree as they are justified by observations. This would require replacing the chronological backtracking search allowing the system to run for sustained periods without a performance impact. One key insight has been that such a tool is critical and effective for operational oceanography even if more work needs to be done. The criticality has to do with the complexity of ship-based operations and with multiple assets with different operational envelopes. Coordinating a team of researchers dispersed in the ship dealing with launch, recovery and operational awareness requires a fine level of coordination which a tool such as EUROPtus can aid with and augment. A major take-away from this experiment overall, has been that by using autonomous systems to study the oceanographic environmental context would require greater coordination between vehicle operations and marine scientists, greater capacity of the operators and vehicles to quickly respond to the dynamics of the targets (whether they be animals or fronts) and vehicles and support vessels that can be quickly dispatched and recovered. To be effective, joint field work has to be simplified, operations must be based on small platforms, that can be easily maneuvered, ensuring real-time interaction between technologists and oceanographers. Acknowledgements Johansen was partly sponsored by the Research Council of Norway through the Centres of Excellence funding scheme, grant number 223254 – NTNU AMOS, and by grant number 235348. Rajan and Py were supported by the US Office of Naval Research, ONR Grant # N00014-14-1-0536. Silva was supported by FEDER and COMPETE funds and by FCT grant # IF/00943/2013. Silva was also supported by FRCT from the Government of the Azores. Authors are grateful to the Cetacean Ecology Research Group including Pablo Chellavard Navarro, Rui Prieto, Cláudia Oliveira, Marta Tobena, and skipper Renato Bettencourt, and to NTNUs UAV team, including Lars Semb, Krzysztof Cisek, Frederik Leira and João Fortuna. This work was partially supported by the SUNRISE - “Sensing, monitoring and actuating on the UNderwater world through a federated Research InfraStructure Extending the Future Internet”, project #611449, funded by the European Union’s Seventh Framework Programme for Research and Technological Development - Large scale integrating project (IP). Finally, we are grateful to the entire LSTS team that participated in the REP-15 exercise in the Açores. References 1. Ryan, J.P., Johnson, S., Sherman, A., Rajan, K., Py, F., Thomas, H., Harvey, J., Bird, L., Paduan, J., Vrijenhoek, R.: Mobile autonomous process sampling within coastal ocean observing systems. Limnol. Oceanograhy Methods 8, 394–402 (2010)CrossRef 2. Ryan, J.P., McManus, M.A., Kudela, R.M., Artigas, M.L., Bellingham, J.G., Chavez, F.P., Doucette, G., Foley, D., Godin, M., Harvey, J.B.J., Marin III, R., Messié, M., Mikulski, C., Pennington, T., Py, F., Rajan, K., Shulman, I., Wang, Z., Zhang, Y.: Boundary influences on HAB phytoplankton ecology in a stratification-enhanced upwelling shadow. Deep Sea Res. Part II 101, 63–79 (2014)CrossRef 3. Pinto, J., Faria, M., Fortuna, J., Martins, R., Sousa, J., Queiroz, N., Py, F., Rajan, K.: Chasing fish: tracking and control in a autonomous multi-vehicle real-world experiment. In: MTS/IEEE Oceans, San Diego, California (2013) 4. Faria, M., Pinto, J., Py, F., Fortuna, J., Dias, H., Leira, R.M.F., Johansen, T.A., Sousa, J.B., Rajan, K.: Coordinating UAVs and AUVs for oceanographic field experiments: challenges and lessons learned. In: IEEE International Conference on Robotics and Automation (ICRA), Hong Kong (2014) 5. Das, J., Py, F., Harvey, J., Ryan, J., Gellene, A., Graham, R., Caron, D.A., Rajan, K., Sukhatme, G.S.: Data-driven robotic sampling for marine ecosystem monitoring. Int. J. Robot. Res. 34(12), 1435–1452 (2015)CrossRef 6. Encarnaçao, P., Pascoal, A.: Combined trajectory tracking and path following: an application to the coordinated control of autonomous marine craft. In: Decision and Control, vol. 1, pp. 964–969. IEEE (2001) 7. Arrichiello, F., Das, J., Heidarsson, H.K., Sukhatme, G.S., Chiaverini, S.: Experiments in autonomous navigation with an under-actuated surface vessel via the null-space based behavioral control. In: Conference on Advanced Intelligent Mechatronics, pp. 362–367, July 2009 8. Pinto, J., Martins, P.S.D.R., Fortuna, J., Marques, E., Sousa, J.: The LSTS toolchain for networked vehicle systems. In: MTS/IEEE Oceans, pp. 1–9. IEEE (2013) 9. Ghallab, M., Nau, D., Traverso, P., Planning, A.: Theory and Practice. Elsevier Science, San Francisco (2004)MATH 10. Chrpa, L., Pinto, J., Ribeiro, M., Py, F., Sousa, J., Rajan, K.: On mixed-initiative planning and control for autonomous underwater vehicles. In: Proceedings of the Intelligent Robots and Systems (IROS), Hamburg, Germany (2015) 11. Ai-Chang, M., Bresina, J., Charest, L., Chase, A., Hsu, J., Jonsson, A., Kanefsky, B., Morris, P., Rajan, K., Yglesias, J., Chafin, B., Dias, W., Maldague, P.: MAPGEN: mixed initiative planning and scheduling for the Mars’03 MER mission. IEEE Intell. Syst. 19(1), 8–12 (2004)CrossRef 12. Bresina, J., Jonsson, A., Morris, P., Rajan, K.: Activity planning for the mars exploration rovers. In: International Conference on Automated Planning and Scheduling (ICAPS), Monterey, California (2005) 13. Hodgson, A., Kelly, N., Peel, D.: Unmanned aerial vehicles (UAVs) for surveying marine fauna: a dugong case study. PLoS ONE 8 (2013) 14. Cunha, H.A., Sole-Cava, A.M.: Molecular sexing of tucuxi dolphins (Sotalia guianensis and Sotalia fluviatilis) using samples from biopsy darting and decomposed carcasses. Genet. Mol. Biol. 30, 1186–1188 (2007)CrossRef 15. Sousa, L.L., López-Castejón, F., Gilabert, J., Relvas, P., Couto, A., Queiroz, N., Caldas, R., Dias, P.S., Dias, H., Faria, M., Ferreira, F., Ferreira, A.S., Fortuna, J., Gomes, R.J., Loureiro, B., Martins, R., Madureira, L., Neiva, J., Oliveira, M., Pereira, J., Pinto, J., Py, F., Queiro, H., Silva, D., Sujit, P.B., Zolich, A., Johansen, T.A., Sousa, J., Rajan, K.: Integrated monitoring of mola mola behaviour in space and time. PLoS ONE (2016). Accepted July 16. Py, F., Rajan, K., McGann, C.: A systematic agent framework for situated autonomous systems. In: 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Toronto, Canada, May 2010 17. Rajan, K., Py, F.: T-REX: partitioned inference for AUV mission control. In: Roberts, G.N., Sutton, R. (eds.) Further Advances in Unmanned Marine Vehicles. The Institution of Engineering and Technology (IET), August 2012 18. Rajan, K., Py, F., Berreiro, J.: Towards deliberative control in marine robotics. In: Seto, M.L. (ed.) Autonomy in Marine Robots, pp. 91–175. Springer, New York (2012). doi:10.​1007/​978-1-4614-5659-9\_​3 19. Jónsson, A., Morris, P., Muscettola, N., Rajan, K., Smith, B.: Planning in interplanetary space: theory and practice. In: Artificial Intelligence Planning and Scheduling (AIPS) (2000) 20. Muscettola, N., Nayak, P., Pell, B., Williams, B.: Remote agent: to boldly go where no AI system has gone before. Artif. Intell. 103, 5–48 (1998)CrossRefMATH 21. Dean, T., Boddy, M.: Reasoning about partially ordered events. Artif. Intell. 36(3), 375–399 (1988)MathSciNetCrossRefMATH 22. Allen, J.: Towards a general theory of action and time. Artif. Intell. 23(2), 123–154 (1984)CrossRefMATH 23. Frank, J., Jónsson, A.: Constraint-based attribute and interval planning. Constraints 8(4), 339–364 (2003)MathSciNetCrossRefMATH 24. Sousa, A., Madureira, L., Coelho, J., Pinto, J., Pereira, J., Sousa, J., Dias, P.: LAUV: the man-portable autonomous underwater vehicle. Navig. Guidance Control Underwater Veh. 3(1), 268–274 (2012) 25. Martins, R., Dias, P., Marques, E., Pinto, J., Sousa, J., Pereira, F.: IMC: a communication protocol for networked vehicles and sensors. In: OCEANS 2009 - EUROPE, pp. 1–6 (2009) 26. Das, J., Py, F., Maughan, T., Messie, M., O’Reilly, T., Ryan, J., Sukhatme, G.S., Rajan, K.: Coordinated sampling of dynamic oceanographic features with AUVs and drifters. Int. J. Robot. Res. 31, 626–646 (2012)CrossRef Footnotes 1 http://​rep15.​lsts.​pt/​.   2 Note: acoustic communication was only used to track vehicle location.   3 Both the launch/recovery operation and charging times where approximate yet reasonable estimates.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_30 Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers Aaron Bestick¹  , Ruzena Bajcsy¹ and Anca D. Dragan¹ (1) Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA     Aaron Bestick Email: abestick@gmail.com Abstract We focus on selecting handover configurations that result in low human ergonomic cost not only at the time of handover, but also when the human is achieving a goal with the object after that handover. People take objects using whatever grasping configuration is most comfortable to them. When the human has a goal pose they’d like to place the object at, however, the most comfortable grasping configuration at the handover might be cumbersome overall, requiring regrasping or the use of an uncomfortable configuration to reach the goal. We enable robots to purposefully influence the choices available to the person when taking the object, implicitly helping the person avoid suboptimal solutions and account for the goal. We introduce a probabilistic model of how humans select grasping configurations, and use this model to optimize expected cost. We present results in simulation, as well as from a user study, showing that the robot successfully influences people’s grasping configurations for the better. 1 Introduction Handovers happen frequently in collaborative manipulation tasks. Be it when cooking a meal or assembling a device in a factory workcell, we need to pass objects to each other in order to work more effectively. As a result, making robot-to-human handovers seamless has been an area of growing importance in robotics research [1–10]. Imagine unloading the dishwasher with a robot. The robot comes to give you a mug so that you can place it in the cupboard. The way the robot presents you the mug (its position, orientation, and the grasp the robot is already occupying on the object) leads to you having a number of options for how to grasp it, some demanding more effort than others. In the end, the robot’s choice of grasp and the object’s pose in SE(3) affects how comfortable the handover is for you, as well as what you can easily do with the object after the handover: how easily you can just lay it down in the desired spot in the cupboard. Naturally, the robot can take this into account when planning its handover. Prior work has focused on selecting robot grasping configurations [2, 4, 6–8, 10] or object handover locations [1, 3, 5, 9] that maximize the number or range of feasible human grasps [2, 7, 8] or minimize human ergonomic cost [1, 3–6, 9, 10]. In contrast, our work enables the robot to minimize expected cost: our insight is that, although we can’t control the human’s grasp directly, we can model the probability that the human will select a particular grasping configuration. This probability distribution can then be used to evaluate the ergonomic cost to the person in expectation, accounting for what they are more or less likely to do (Fig. 1). [] Fig. 1. Setup & Summary. We focus on finding robot handover grasps and object transforms that encourage the human to select good grasps, especially when the human has a next goal for the object. We model how humans select grasping configurations, and leverage that model to minimize expected total ergonomic cost to the human. We propose to model the human as approximately-rational, selecting a grasping configuration with higher probability if its ergonomic cost is lower. Having such a model of how the human will select a grasp enables the robot to influence the human to select better grasps. In particular, we investigate two implications: Avoiding suboptimal choices for the human, but only when these choices are actually likely: The natural alternatives to having a model of how the user takes the object and minimizing expected cost are either (1) to maximize the total number of grasping configurations available to the user and give them the most flexibility [2], or (2) to minimize average cost to the person [4, 6, 10], without weighting the choices by the probability that the human will actually select them. Compared to the first, minimizing expected cost enables the robot to produce good configurations as opposed to many configurations. The second, minimizing average cost, also achieves that. However, it also tries to avoid allowing high-cost configurations, because these increase the cost mean. In contrast, in our approach, high-cost configurations do not actually matter, so long as low-cost configurations are available, because the human is very unlikely to select them. Instead, it is suboptimal yet low-cost configurations that are troublesome — these are the configurations that the human might select with high probability, due to the fact that they are not perfectly rational. Our formalism naturally eliminates such choices for the human to the extent possible, helping them select the better options. Encouraging the human to plan ahead: Usually when we receive an object, it is because we need to do something with it. There is some goal (or set of) goal pose(s) for the object. However, humans are not always very good at planning ahead: they might select a grasping configuration comfortable for taking the object, without thinking of how they will need to manipulate it afterwards. By modeling the human as approximately rational for the handover stage, but myopic to the next stage, we enable robots to minimize expected total cost to the human at both the handover and goal, accounting for this myopia. As a result, the robot avoids handing over an object in ways that allow for low-cost grasps which would have high cost at the goal: if a grasp looks tempting to the human locally, but would make it difficult for the human to satisfy the goal afterwards, then the robot will try to hand over the object in a way that makes such a grasp infeasible. We tested our approach in simulation and in a user study, suggesting that the robot can successfully influence people to take objects in a way that makes it easier for them to achieve their goal. 2 Related Work Our main contribution is to explicitly model the probability of the human choosing different available grasps during handover planning, enabling the robot to optimize for [$$expected $$] ergonomic cost. A secondary contribution of our work is accounting for the human’s goal in the context of minimizing ergonomic cost, enabling the robot to influence the person to select a better grasp. Table 1 categorizes related work along three axes: whether the method accounts for feasibility only or also for ergonomic cost, whether the method accounts for the human’s goal, and whether the method accounts for positions of the object only or also grasps. Table 1. Prior handover planning approaches +---------------+------------------+--------+----------------+--------------+ |   | Feasibility only | | Ergonomic cost | | +:==============+:=================+:=======+:===============+:=============+ | | H only | H + G | H only | H + G | +---------------+------------------+--------+----------------+--------------+ | Position only |   |   | [1, 3, 5, 9] |   | +---------------+------------------+--------+----------------+--------------+ | Grasp config. | [2] | [7, 8] | [4, 6, 10] | (This paper) | +---------------+------------------+--------+----------------+--------------+ 3 Technical Approach Notation. To choose a handoff configuration, we must select the robot’s grasp on the object [$$g\_R$$] and the object pose with respect to the world frame [$$T^{\text {hand}}\_{OW}$$] at which the human will take the object. The object to be handed off allows the human to grasp it at some set of poses [$$G\_H \subset SE(3)$$], which we represent as a Task Space Region [11], and discretize to give a finite set of feasible human grasps, so [$$G\_H \triangleq \{g\_{H1},...,g\_{Hk}\}$$]. Given a handoff grasp and object pose, [$$(g\_R, T^{\text {hand}}\_{OW})$$], each possible human grasp [$$g\_{Hi}$$] will be reachable with zero or more inverse kinematics (IK) solutions, which we collect into a set [$$Q^{\text {hand}}\_{g\_{Hi}}$$]. The union of these sets [$$Q^{{\text {hand}}}\_{(g\_R, T^{\text {hand}}\_{OW})} \triangleq \bigcup Q^{{\text {hand}}}\_{g\_{Hi}}$$] gives all the available “taking” configurations available to the human given the robot’s choice of [$$(g\_R, T^{\text {hand}}\_{OW})$$]. A human grasp [$$g\_{Hi}$$] also induces IK solutions at the object’s goal pose, [$$T^{goal}\_{OW}$$], which we collect in a set [$$Q^{\text {goal}}\_{g\_{Hi}}$$]. Human Grasp Selection Model. Among possible options, we chose to model the human ergonomic cost as the distance from some ideal nominal resting configuration [$$q^\*$$] w.r.t. some metric w: [$$\begin{aligned} \begin{aligned} C(q) \triangleq || \text {diag}(w) (q - q^\*) || \end{aligned}\end{aligned}$$] (1) While we chose this cost function for its simplicity, it would be easy to substitute any other function which maps human limb configurations to ergonomic costs. We would expect superior performance when using cost functions which more accurately capture the human’s preferences. We model the human as approximately-rational, selecting a grasping configuration q at handover time with higher probability when it has lower cost: [$$\begin{aligned} \begin{aligned} P(q) \propto e^{-\lambda C(q) } \end{aligned}\end{aligned}$$] (2) P(q) at the time of handover is normalized over all possible grasping configurations [$$Q^{{\text {hand}}}\_{(g\_R, T^{\text {hand}}\_{OW})}$$]. We can also compute the probability of a grasping configuration given a particular grasp,[$$P(q^{{\text {hand}}}|g\_{H})$$], by normalizing over [$$Q^{{\text {hand}}}\_{g\_{H}}$$], and [$$P(q^{{\text {goal}}}|g\_{H})$$] at the goal by normalizing over [$$Q^{{\text {goal}}}\_{g\_{H}}$$]. Finally, we can compute the probability of a human grasp by summing over all the IK solutions at that grasp: [$$P(g\_{H})=\sum \_{q\in Q^{{\text {hand}}}\_{g\_{H}}}P(q)$$]. Optimization. When the human does not have a (known) goal, we optimize for expected cost at the handover time: [$$\begin{aligned} \begin{aligned} \min \_{g\_R, T^{\text {hand}}\_{OW}} \sum \_{q\in Q^{{\text {hand}}}\_{(g\_R, T^{\text {hand}}\_{OW})}} P(q)C(q) \end{aligned}\end{aligned}$$] (3) When the human does have a goal, we optimize for expected total cost. The expected cost at the goal is based on the probability of each grasp based on what happened at the handover, [$$P(g\_{H})$$], and the probability of each configuration given that grasp: [$$\begin{aligned} \begin{aligned} \min \_{g\_R, T^{\text {hand}}\_{OW}} \sum \_{g\_{H}} \left[ \sum \_{q^{\text {}}\in Q^{\text {hand}}\_{g\_{H}}} P(q^{\text {}}|g\_{H})P(g\_{H})C(q) + \sum \_{q^{\text {}}\in Q^{\text {goal}}\_{g\_{H}}} P(q^{\text {}}|g\_{H})P(g\_{H})C(q) \right] \end{aligned}\end{aligned}$$] (4) [] Fig. 2. Case Study w/o Goal - Increasing Great Choices, Reducing OK Choices, Disregarding Bad Choices. A comparison between maximizing the number of feasible human grasping configurations Q (top), minimizing the average ergonomic cost (middle), and minimizing expected ergonomic cost (bottom), for the case of a single handover without a known object goal pose. The columns show the most probable human configuration (left), the configuration with the largest contribution to the total cost (middle), and the full space of configurations (right). Our method increases the number of great choices and decreases the number of OK choices which the human might actually pick. It also keeps bad choices if needed, because they have a low probability of being selected anyway. 4 Case Studies We start with two case studies, highlighting the benefits of our approach: eliminating suboptimal yet tempting grasping configurations. Expected Cost at Handover Time. Figure 2 compares optimizing for feasibility, average cost, and expected cost, in a scenario where the PR2 robot is handing over a mug to a human. For each case, we take the robot grasp and object transform that arises from the optimization, and compute: (1) the human grasping configuration of minimum cost; (2) the most “risky” human grasping configuration, that is not high-cost cost enough to be easily discarded by the human; (3) all human grasping configurations available; and (4) the histogram of costs for these configurations. We find that maximizing the number of feasible options can be dangerous, because it might mean the expected cost is rather high, and the best configuration is not as good. Compared to minimizing average cost, we find that minimizing expected cost will allow more high-cost configurations because there is a very low probability for the human to pick them (marked “unimportant” on the histogram), but will allow fewer configurations that have good cost but not great (marked “problematic” on the histogram). These are configurations for which the probability is high enough that the human might pick them, but they are not as good as the best configurations. [] Fig. 3. Case Study w. Goal - Reducing Total Cost. A comparison between maximizing the number of feasible human grasping configurations Q at the handover and goal (top), minimizing the average ergonomic total cost (middle), and minimizing expected total cost (bottom). The columns show the most probable human configuration at handover time (left), and at the goal (center), along with a plot of cost for each available grasp to the human. Our method makes it such that the tempting configurations (low cost at handover) also have low cost at the goal. Experimental Insight 1: A robot that models human handover choices can make it more likely that the person will actually select a comfortable handover grasp. Expected Total Cost (Handover + Goal). Figure 3 compares the three approaches from above when accounting for the human goal. Feasibility here accounts for the number of feasible configurations at both the handover and the goal, average cost accounts for cost at the start and goal, and so does expected cost. For each case, we take the resulting robot grasping configuration and compute (1) the human grasping configuration of minimum handover cost, which is what the human will most likely choose if they are being myopic; (2) given this grasp, the configuration of minimum cost at the goal (assuming no regrasp); and (3) the expected cost at the handover and at the goal for each human grasp. We find that maximizing feasible options can lead to very poor options at the goal. Compared to minimizing average cost, we find that minimizing expected cost is better at eliminating grasps that have low cost at handover time but only allow for high cost at the goal. Experimental Insight 2: A robot that models human handover choices can make it more likely that the person will select a handover grasp that also allows for comfortably achieving the goal after the handover. 5 Simulation Study Our case studies used a single object and a single goal configuration. Here we expand to an experiment that manipulates both as factors. 5.1 Experimental Design Manipulated Factors. We manipulate three factors. The first is the metric we optimize, as in the case study: maximizing number of feasible options, minimizing average cost, or our metric, minimizing expected cost. The second is the object being handed over by the robot: a mug as before, a glass, a pitcher, and a plate, for a total of 4 objects. These objects have vastly different TSR choices. The third is the goal pose, for which we use 5 different poses. This leads to a total of 3(metrics) [$$\times $$] 4(objects) [$$\times $$] 5(goals) = 60 conditions. [] Fig. 4. Optimal Handover for Different Goal Poses. The different goal poses in our experiment lead to different optimal handover configurations for the robot, each selected to minimize expected total cost at the handover time and at that particular goal. The chart averages the expected total (handoff + goal) ergonomic costs for each of the three metrics. Dependent Measures. As in the case studies, we measure expected total cost. Hypothesis. Our metric is designed to optimize expected total cost (the dependent measure), so we already know it will perform the best. The question remains whether our metric will be better by a significant margin. Our hypothesis is that it will: Our metric will result in a significant improvement in expected cost compared to the baselines. 5.2 Analysis We ran an ANOVA with metric as a factor to test differences among the three metrics across objects and goal poses. We found a significant main effect, [$$F(2,58)=1031.07$$], [$$p<.0001$$]. A post-hoc analysis with Tukey HSD showed that all three metrics were significantly different from each other, with the average cost outperforming maximum feasibility ([$$p<.0001$$]) and our metric outperforming average cost ([$$p<.001$$]), in line with our hypothesis. Figure 4 shows how the robot’s grasping configuration changes as the goal pose for the human changes. The robot will present the mug so that the person grabs it by the top when it needs to be placed right side up, by the side when it needs to be placed upside down, etc. In line with our hypothesis, the expected cost was three times lower with our approach compared to the maximum feasibility baseline, and two times lower compared to the minimum cost baseline. Figure 5 shows how the robot’s grasping configuration changes, for a given goal pose, as the object changes. The robot holds the objects in different ways to ensure that the person can easily grasp them by the side and set them down vertically with ease. [] Fig. 5. Optimal Handover for Different Objects. The different objects in our experiment lead to different optimal handover configurations for the robot for a given goal. The chart averages the expected total (handoff + goal) ergonomic costs for each of the three metrics across objects. 6 User Study The previous sections tested our method in simulation, assuming users who act according to our model. Real people do not. We conducted a user study to test whether the simulation results generalize, and to explore whether users perceive the improvement brought about by our method. 6.1 Experimental Design Manipulated Factors. We manipulated three factors. We manipulated the metric the robot used to compute its handover configuration, using our metric based on the user model we proposed, [$$\min E[C]$$], and the maximum feasibility baseline [$$\min |Q|$$]. We used the mug as the handover object for this experiment, and manipulated the goal pose using 10 different poses. In these poses, the mug was placed upside down, upright, and to the side to ensure variance. Finally, we manipulated whether the user knows the goal (Fig. 6). We did this because we wanted to separate the two assumptions our method is making: that users select grasping configurations based on ergonomic cost, and that users are myopic or greedy in this selection, only accounting for ergonomic cost at handover time but not at the goal. Therefore, manipulating the user’s knowledge of the goal enables us to test not only how our method performs overall (in realistic situations in which users have a goal and are aware of it), but also whether our method is influencing the users’ grasp choice in the way we expected, assuming users are actually myopic (which in reality might or might not be the case). Altogether, this led to 2(metrics) [$$\times $$] 10(goals) [$$\times $$] 2(knowledge) = 40 conditions. Subject Allocation. We recruited 9 users (6 male, 3 female, ages 22–29). All of the factors were within-subjects, meaning each user experienced all conditions. We counterbalanced the order of the metrics to avoid order effects, and randomized the order of the goals. We split the experiment in two parts, the first in which the user did not know the goal, and the second in which they did: In Part 1, the robot handed the object to the person at each of the 20 optimal handover configurations (one for each metric and goal pose), but the user was not told the goal used by the planner. We instructed the user to take the object from the robot and immediately drop it in a box. This ensured that no notion of a goal pose would impact the subject’s choice of object grasp. This portion of the experiment evaluated the two algorithms’ ability to influence the subjects to select a particular grasp when the subject was not aware of a goal, i.e. when the myopic/greediness assumption holds. In Part 2, a pictoral marker was placed on a table next to the subject indicating the object’s goal pose during each handoff. The subject was told that two different algorithms, “Program 1” and “Program 2,” would be used during this part of the experiment. We conducted handovers at the same 20 configurations as before, but this time the subject was instructed to place the object at the indicated goal pose. We told the users before each handover which of Programs 1 and 2 was in use. This portion of the experiment evaluated the algorithm’s ability to influence people to select ergonomically optimal grasps even when they know the goal, i.e. they are not necessarily myopic. Furthermore, it enabled us to ask users to compare the two methods, seeing if their notion of comfort matches ours. If people are actually myopic about the goal when selecting a grasping configuration, then we expect results for this second part to match those from the first part. [] Fig. 6. User study setup. Dependent Measures. We used both objective and subjective measures. Objective: We annotated for each condition which of the 6 TSRs for the mug the person selected. From this, we computed expected cost over all IK solutions at the goal, for all grasps that were feasible at handover time (i.e. had feasible IK solutions), making 2 assumptions: (1) the person follows our ergonomic model, and (2) we know the human kinematics: [$$\begin{aligned} \text {OM1: }E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover} \end{aligned}$$] (5) To alleviate bias in our results induced by the two assumptions, we introduce 3 additional metrics that break each assumption separately as well as both assumptions together: we break the first assumption by computing average cost (which is the expected cost using a uniform distribution, i.e. [$$\lambda =0$$]) instead of expected ergonomic cost, and we break the second assumption by allowing infeasible grasps that a person with different kinematics might have chosen: [$$\begin{aligned}&\text {OM2: }E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover}\end{aligned}$$] (6) [$$\begin{aligned}&\text {OM3: }E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\end{aligned}$$] (7) [$$\begin{aligned}&\text {OM4: }E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR} \end{aligned}$$] (8) We did not estimate cost at handover time, because we were specifically interested in whether the robot successfully influences users to select grasps that are good at the goal. Indeed, we might see lower handover time costs for the baseline condition because it restricts the users less. Subjective. After each complete experiment, the subject answered a series of 1-7 Likert-scale survey questions about which program they preferred, which program made their goal easier to accomplish, and which program inspired the most trust in the robot. These capture each subject’s subjective opinion about which metric was more effective at making interaction with the robot comfortable and effective. Hypotheses H1. IF humans are actually myopic when selecting grasping configurations (e.g. when they are not even aware of the goal), our method successfully influences them to select configurations with lower cost at the goal compared to the baseline. H2. Our method influences people to select configurations with lower cost at the goal compared to the baseline, even when they are aware of the goal. H3. People prefer to work and are more comfortable with a robot using our method compared to the baseline. Table 2. Estimated human ergonomic costs at goal (Part 1: Users not aware of goal) +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | Objective measure | [$$\min |Q|$$] | [$$\min E[C]$$] | +:===========================================================================================================================+:===============+:================+ | [$$E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover}$$] | 12.43 | 6.02 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover}$$] | 12.41 | 6.30 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}$$] | 12.18 | 11.26 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}$$] | 12.28 | 11.45 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ Table 3. Estimated human ergonomic costs at goal (Part 2: Users aware of the goal) +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | Objective measure | [$$\min |Q|$$] | [$$\min E[C]$$] | +:===========================================================================================================================+:===============+:================+ | [$$E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover}$$] | 11.42 | 5.37 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}\text { feas. at handover}$$] | 11.52 | 5.61 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 10, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}$$] | 11.72 | 11.02 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ | [$$E[C(q^{goal})], \lambda = 0, \forall q\_{goal} \in \text {IK(g)}, \forall g\in \text {TSR}$$] | 11.84 | 11.23 | +----------------------------------------------------------------------------------------------------------------------------+----------------+-----------------+ 6.2 Analysis H1. We used results for part 1 of the study, when users are not aware of the goal, to test H1. We first computed Cronbach’s [$$\alpha $$] for the four objective measures, which was high at .9036. We thus computed an aggregate goal cost using all four measures. We then ran a repeated-measures factorial ANOVA on this aggregate, with goal and metric as factors. We found a significant main effect for metric, as expected ([$$F(1,179)=377.83$$], [$$p<.0001$$]), and a significant main effect for goal ([$$F(9,171)=26.79$$], [$$p<.0001$$]). However, there was also a significant interaction effect, and so we conducted a Tukey HSD post-hoc, comparing all pairs but compensating for multiple comparisons. The analysis revealed that the expected cost (our) metric resulted in significantly lower cost at the goal than the baseline for 7 out of the 10 goals, all with [$$p<.03$$]. This supports our hypothesis H1, but suggests that the benefit of our method does depend on the choice of the goal pose, with the maximum feasibility baseline being sufficient for some goals. Table 2 shows the goal ergonomic costs estimated by each of the four measures, averaged across all nine study participants for this part of the study. It shows that pose optimization with [$$\min E[C]$$] gives consistently lower ergonomic cost at the goal than optimization with [$$\min |Q|$$]. This difference is particularly marked for the first two measures, which consider only grasps feasible at the handover. These results suggest that expected ergonomic cost can be used to influence humans to choose grasps with good ergonomic properties even when they are completely unaware of the goal. H2. For part 2, when users were given specific goals, our objective measures again had high item reliability, Cronbach’s [$$\alpha =.8830$$]. We again computed an aggregate cost. We again ran a factorial repeated-measures ANOVA, and the results, as expected, were analogous to the results from part 1. We again saw significant main effects, but also a significant interaction between the factors. As before, a post-hoc with Tukey HSD corrections showed that 7 out of the 10 goals saw significantly lower costs at the goal with our method than with the baseline. The set of these 7 goals was almost identical to the one in part 1, with the exception of one goal no longer showing a significant difference, and one goal starting to show a significant difference. This supports out hypothesis H2: our method does not only help users improve performance when we force them to be myopic by not making them aware of the goal – it helps in realistic situations, when users have a goal that they are aware of. This suggests that people are indeed myopic in their selections of a grasp configuration. Table 3 shows the goal ergonomic costs estimated by each of the four objective metrics, averaged across all nine study participants for Part 2 of the study, where subjects were instructed to place the object on a pictoral marker at the goal pose after each handover. We see a similar improvement in ergonomic costs when minimizing E[C] versus maximizing |Q|. Here, we found it interesting that the costs dropped slightly across the board. This suggests that perhaps when people are aware of the goal they perform slightly better, but that still our method can significantly help them to further improve their performance. H3. Table 4 summarizes users’ subjective ratings. t-tests showed that our method outperformed the baseline in user overall preference, how helpful they thought the robot was, how much they trusted the robot, and how easy it was to do the task. Users thought that the robot understood their goal and that it handed them objects in a way that made their task easier. Table 4. Post-study survey results +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | Statement | [$$\min |Q|$$] | [$$\min E[C]$$] | t(8) | p | +:=================================================================================+:===============+:================+:=====+:===============+ | “I prefer Program” | 2.0 | 6.2 | 9.73 | [$${<}.0001$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “The robot was helpful when running Program” | 3.4 | 6.4 | 6.80 | [$${<}.0001$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “I trust the robot running Program” | 3.7 | 6.1 | 4.4 | [$${<}.01$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “The robot understood my goal when running Program” | 2.8 | 6.4 | 5.33 | [$${<}.001$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “It was physically easy to do the task when the robot was running Program” | 2.8 | 6.2 | 6.50 | [$${<}.001$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “The robot running Program handed me objects in a way that made the task easier” | 2.0 | 6.3 | 9.19 | [$${<}.0001$$] | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ | “If you had to choose a program you prefer, which would it be?” | 0% | 100% | - | - | +----------------------------------------------------------------------------------+----------------+-----------------+------+----------------+ The users’ comments were particularly enlightening (here Program 1 refers to the baseline and Program 2 refers to our method): “With Program 2, I could move straight from grip to the target with a natural motion. With Program 1, I would sometimes have to contort my arm unnaturally to place the mug correctly.”; “Program 1 made it easier to pick up objects but harder to achieve the goal. Program 2 sometimes made it more difficult to pick up objects but achieving the goal was easier.”; “Program 1 is an a\*\*hole.” 7 Discussion Summary. We introduced a model of how people take an object from the robot, and used it to select robot actions that lead to better outcomes for the person. Especially when the person has a goal for the object after the handover, but they are myopic or greedy in their selection of their grasp and do not account for the goal, we have shown that the robot can influence the person’s grasp to help them achieve better comfort across the task – at the handover time, but also at the goal time. Limitations and Future Work. Our work is limited in many ways. We optimize for total ergonomic cost to the person, but it is not clear what this ergonomic cost should be, and it will likely differ from human to human. Furthermore, our study did not measure exactly the ergonomic cost at the goal. Although subjective results align well with objective estimates, future work could verify this by instrumenting the person and the object. Conclusion. We are encouraged to see robots able to use their actions to make it more likely that people find good solutions for their task. We are excited to explore further applications beyond handovers, to human plans more broadly. Acknowledgements We gratefully acknowledge Dylan Hadfield-Menell for his many helpful insights. This work was supported by NSF NRI #1427260. References 1. Bestick, A., Burden, S., Willits, G., Naikal, N., Sastry, S.S., Bajcsy, R.: Personalized kinematics for human-robot collaborative manipulation. In: IEEE International Conference on Intelligent Robots and Systems (2015) 2. Cakmak, M., Srinivasa, S.S., Lee, M.K., Forlizzi, J., Kiesler, S.: Human preferences for robot-human hand-over configurations. In: IEEE International Conference on Intelligent Robots and Systems, pp. 1986–1993. IEEE (2011) 3. Huang, C.-M., Cakmak, M., Mutlu, B.: Adaptive coordination strategies for human-robot handovers. In: Robotics: Science and Systems (RSS) (2011) 4. Kim, J., Park, J., Hwang, Y., Lee, M.: Advanced grasp planning for handover operation between human, robot: three handover methods in esteem etiquettes using dual arms and hands of home-service robot. In: 2nd International Conference on Autonomous Robots and Agents, pp. 34–39 (2004) 5. Mainprice, J., Sisbot, E.A., Siméon, T., Alami, R.: Planning safe and legible hand-over motions for human-robot interaction, vol. 2, p. 7 (2010) 6. Micelli, V., Strabala, K., Srinivasa, S.: Perception and control challenges for effective human-robot handoffs. In: RSS 2011 RGB-D Workshop (2011) 7. Quispe, A.H., Amor, H.B., Stilman, M.: Handover planning for every occasion. In: IEEE-RSJ International Conference on Humanoid Robots (2014) 8. Quispe, A.H., Ben Amor, H., Christensen, H., Stilman, M.: It takes two hands to clap: towards handovers in bimanual manipulation planning. In: Robotics: Science and Systems (RSS) (2014) 9. Sisbot, E., Alami, R.: A human-aware manipulation planner. IEEE Trans. Robot. 28, 1045–1057 (2012)CrossRef 10. Strabala, K., Lee, M.K., Dragan, A., Forlizzi, J., Srinavasa, S.S., Cakmak, M., Micelli, V.: Towards seamless human-robot handovers. J. Hum. Robot Interact. 1(1), 112–132 (2012) 11. Berenson, D., Srinivasa, S., Kuffner, J.: Task space regions: a framework for pose-constrained manipulation planning. Int. J. Robot. Res. 30(12), 1435–1460 (2011)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_31 Initial Data and Theory for a High Specific-Power Ankle Exoskeleton Device Sebastian Sovero^(1, 2  ), Nihar Talele^(1, 2  ), Collin Smith^(1, 2  ), Nicholas Cox^(1, 2  ), Tim Swift^(1, 2  ) and Katie Byl^(1, 2  ) (1) OtherLab Orthotics, San Francisco, California, USA (2) Robotics Laboratory, University of California at Santa Barbara (UCSB), Santa Barbara, USA     Sebastian Sovero Email: sesovero@gmail.com   Nihar Talele Email: nihar.talele@gmail.com   Collin Smith Email: collin@otherlab.com   Nicholas Cox Email: njcox@otherlab.com   Tim Swift Email: tim@otherlab.com   Katie Byl (Corresponding author) Email: katiebyl@gmail.com Abstract We present experimental data for an ankle exoskeleton that provides a metabolic benefit during running. Intuitively, there is an optimal level of power that any particular human can accept and use to benefit walking or running, which is a function of the particular human, the selected gait, and speed. We provide and discuss modeling optimization results to complement our recent data for the device, toward modifying future designs and understanding theoretical performance limits. This work is funded in part through an NSF CAREER Award (CMMI 1255018). 1 Introduction Exoskeletons have been a major thrust of robotics research for nearly two decades, with the goal of assisting an operator during a variety everyday tasks. However, despite this potential, conventional exoskeleton designs such as HULC [1], and XOS2 [2] have proven unable to assist with highly dynamic human behaviors like running, and in many cases walking. As a result, rather than reducing required human effort, these devices turn into expensive, heavy exercise machines, since they increase the metabolic burden associated with movement, which has emerged as a primary metric for performance augmentation applications [3]. Recent work, such as the work under DARPA’s Warrior Web program, has made significant strides in this area by advancing a new class of lightweight hardware [4], but even these devices have failed to fully capitalize on the promise of exoskeletons. Despite significant research efforts, there is only one powered mobile device which was developed at MIT [5] that has demonstrated metabolic assistance in a non-stationary task, and this was for walking. While this is a significant result, arguably the most valuable output from this work is the notion of an “augmentation factor” equation, which predicts the metabolic benefit of an exoskeleton before testing. While this equation may not fully predict the full dynamics of the system, it does provide a vocabulary for comparing the burden of mass and the benefit of added power. In this work, we build off the structure of the augmentation factor equation to evaluate how its major components scale as parameters of human gait speed and device power input vary. 2 Technical Approach The augmentation factor equation introduced in [5], shown below, estimates the metabolic benefit an exoskeleton provides to a human during locomotion, due to the combined effects of Augmentation Power (AP) from both added power and dissipative effects and of Power to Carry (PC) due to the location-dependent effects of the added mass of a device that the human user must carry. [$$\begin{aligned} AF=\underbrace{\frac{p^+-p\_{dist}}{\eta ^+} }\_{AP}- \underbrace{\sum \_i \beta \_i m\_i}\_{PC} \end{aligned}$$] (1) At the center of our work here is a realization that the two principal components (mass and power) of the augmentation factor equation are only understood in a very limited context. Specifically, the existing data that are used to capture the metabolic burden of added mass are only accurate for walking at 1.25 m/s, although it seems more intuitive to assume the effect should be velocity-dependent (i.e., [$$\beta \_i$$] = [$$\beta \_i(v)$$]). Similarly, the metabolic benefit provided by the added power is represented as linear across all powers and at all forward speeds. Toward better understanding mass and power effects of a lower-limb exoskeleton, we structured a set of modeling optimizations along with two human subject studies to capture the fundamental aspects of these principal components, for use across a wider variety of operating cases. 2.1 Simulations Recent works using local optimization methods (e.g., Sequential Quadratic Programming or Interior Point methods) have produced a variety of compelling results in the field of legged robotics, particularly toward developing and/or investigating energy efficient gaits [6–8]. These methods have also been demonstrated to translate well to real physical systems, subject to a variety of constraints and objective functions, as demonstrated by multiple teams within the recent DARPA Robotics Challenge (DRC) [9–11]. Here, we use optimization of a seven-link planar biped model to investigate the effects of added mass across a range of walking speeds and for different locations of added mass on each leg. 2.2 Added Mass Study For the burden of added mass, we had a pilot study of 10 unimpaired subjects (8 male, 2 female, 26.1 ± 3.1 years, weight 77.2 ± 11.9 kg, height 1.78 ± .07 m) to evaluate the metabolic burden of added mass throughout walking (1.25 m/s and 1.75 m/s) and running speeds (2.25 m/s and 2.75 m/s). We tested a variety of masses ranging from .45 kg to 3.18 kg on various locations. The masses were placed bilaterally above and below each leg joint (ankle, knee, hip) to simulate the weight of a exoskeleton actuator. The motivation was to build a data set for an exoskeleton designer so that actuator placement and design can be optimized. 2.3 Added Power Study For the benefit of added power, we studied three subjects with varying power levels at running and walking speeds. The exoskeleton (Fig. 1) delivers high peak powers at the ankle. Our preliminary data corresponds to an ankle joint muscle-tendon “apparent efficiency” [12] of [$$\eta ^+\_{ankle}=.30$$] (mechanical watt per metabolic watt) at 2.8 m/s running for the lower power range, where [$$\eta ^+\_{ankle} = \frac{Average \;exoskeleton \;positive \;mechanical \;power}{\varDelta \, Net \;human \;metabolic \;power}$$]. [] Fig. 1. The Otherlab exoskeleton utilizes cloth pneumatic actuators for toe off. This novel cloth actuator functions equivalently to an pneumatic expansion cylinder, but with much lower added weight. With regard to assistive power, Mooney et al. suggest power has a linear effect on the metabolic burden; however, a simple thought experiment leads to the hypothesis that there is a point where you begin to see diminishing returns from adding power to the operator. For a given design, extremely low levels of assistance power result in a metabolic burden because not enough power is put in to overcome the mass of the device. Similarly, if too much power is introduced to the leg, exoskeleton “assistance” likely disrupts an operator’s natural biomechanics. This may create a metabolic burden at high levels of power. In between, there is an ideal amount of added mechanical power that the operator can accept and leverage without disrupting their biomechanics. This hypothesized shape is depicted by the green dashed line in Fig. 11. We suspect that such diminishing returns have not yet been observed because all published beneficial exoskeletons have output less than 30 W at the ankle [12]. [] Fig. 2. 7-link walker model 3 Results 3.1 Simulations For the simulations, the seven-link planar walker shown in Fig. 2 was constructed using mass, inertia and length parameters taken from [13, p. 302] so that the model resembles a 74.2 kg human. The walker has 2 actuators at the hips, 2 at knees and 2 at ankles. We used a partial feedback linearization (PFL) based controller and designed trajectories using [$$4^{th}$$]-order polynomials to generate a walking motion. Parameters for the polynomials were then optimized to minimize the cost of transport. We used the fmincon solver with the interior point algorithm in Matlab for these optimizations. Cost of Transport (CoT) was calculated by adding the absolute values of both positive and negative work at the joints and always dividing by the mass of the unburdened biped model. Figures 3 and 4 show the effects of adding increasing amounts of mass at each limb at either the lower shank of the leg (i.e., at the ankle) or the lower portion of the thigh (just above the knee), with walking speed constrained at 1.3 (m/s), while Figs. 5 and 6 show how CoT varies as a function of speed. Here, the simulation is constrained to a walking (i.e., non-running) gait. [] Fig. 3. Cost of transport(CoT) versus total mass added bilaterally to the lower shank. These simulation data were taken for walking at 1.3 m/s. [] Fig. 4. Cost of transport(CoT) versus total mass added bilaterally to the lower thigh. These simulation data were taken for walking at 1.3 m/s. [] Fig. 5. Variation of Cost of Transport (CoT) due to change in walking speed with and without load. A total load of 2.7 kg was added bilaterally to lower shank for each simulation optimization data point. [] Fig. 6. Variation of Cost of transport(CoT) due to change in walking speed with and without load. A total load of 4.5 kg was added bilaterally to lower thigh for each simulation optimization data point. 3.2 Added Mass Study Figures 7 and 8 show human subject data when the total mass shown on the x-axis is distributed bilaterally at either the lower shank (just above the ankle) or lower thigh, respectively. Figures 9 and 10 show how human energy consumption varies as a function of locomotion speed, both with and without added mass. [] Fig. 7. Experimental data for mass added to the lower shank. The energy is normalized for the subject walking at 1 m/s. This data is for subjects walking at 1.25 m/s. [] Fig. 8. Experimental data for mass added to the lower thigh. The energy is normalized for the subject walking at 1 m/s. This data is for subjects walking at 1.25 m/s. [] Fig. 9. Cost of transport for strapping a total mass of 2.7 kg bilaterally to the lower shank. Notice that the burden seems to grow with forward speed. The subjects were asked to walk at 1.25 and 1.75 m/s, and only run at 2.0 and 2.75 m/s. [] Fig. 10. Cost of transport for strapping a total mass 4.5 kg bilaterally to the lower thigh. Notice that the burden seems to grow with forward speed. The subjects were asked to walk at 1.25 and 1.75 m/s, and only run at 2.0 and 2.75 m/s. Note that for both these experimental data and the corresponding simulation optimizations (Figs. 3 through 6), adding mass at the more distal location is both more costly and more velocity-dependent (slope of data), and that the effects of varying velocity are approximately linear across walking speeds, as theoretically anticipated. At the higher two velocities tested in both Figs. 9 and 10, subjects were performing a running gait. Simulating analogous results for running remains a task for future work. While the biped simulations lack much of the details of a true human – for example neglecting upper limb motion, simulating foot contact with a rolling arc, and constraining motion to the sagittal plane – they arguable capture and support general trends within the human subject data. For example, adding mass at the lower thigh (vs the ankle) requires surprisingly little additional energy during locomotion. In fact, if one divides by the total system mass (including the human plus added mass) in simulation, COT actually goes down as mass is added, indicating humans would walk more efficiently with a different mass distribution, increasing mass near the knees. Also, human effort increases with a significant slope as walking speed approaches the preferred walk-to-run transition speed (Figs. 9 and 10) but is comparatively flat for different running speeds, as previously noted in the literature [14, Fig. 9.3]. Finally, if one assumes a muscle efficiency of around 25% [15], we would expect a human would require on the order of four (i.e., 1 / 0.25) times the energy predicted in our simulations. This is in general agreement with a slope of 0.011 in Fig. 7 vs 0.0027 in Fig. 3. 3.3 Added Power Study Our aim throughout is to understand the effects of two aspects of exoskeleton design: added power and added mass. A key insight is that high specific power, i.e., a high power to weight ratio, is more essential than simply providing more power alone, due in particular to the burden of added mass near the ankle. In this section, we present initial data to examine the effects of added power. Below are results from human testing performed at Otherlab. Data in Fig. 11 documents the first demonstration of metabolic benefit from an exoskeleton during running gaits. Although these data comprise a limited set of trials, the results are very promising. [] Fig. 11. Individual responses to positive mechanical power supplied by (and with the added mass burden of) the exoskeleton. On average, subjects used 1100 W to run unassisted at 2.8 m/s. Data were taken from subjects running at 2.8 m/s. There were 9 tests distributed across 3 subjects (S1,S2,S3). A least squares interpolation yields [$$y=3.34x-75.9$$] with [$$R^2$$] = 0.84, suggesting this may be the first exoskeleton to produce a metabolic benefit while running. The green dashed line qualitatively depicts diminishing returns we suspect will occur at unknown higher power levels. 4 Conclusions Our recent experimental data indicate that we have made the first metabolically beneficial powered running exoskeleton. Coupled with additional simulation data and added mass human studies, we have provided initial clues to explain why our device succeeds where so many others have failed. In particular, both simulations and human subject data show the increased impact of adding mass distally, near the ankle and foot. In contrast, the data also show a surprisingly low burden associated with a location just above the knee. These findings deserve further study and may significantly influence design of future exoskeletons, particularly when power must be carried on board (Fig. 12). [] Fig. 12. This cartoon figure illustrates a comparison of two devices of the same mass, but different power capacities. A higher specific power can deliver more power for the same mass, resulting in usefulness over a larger range of speeds. In contrast, a electromechanical (EM) device can only offset it’s mass burden at slow walking speeds. The corner in the green curve comes from the power saturation. At this point, mass burden grows, but power supplied by the device remains static. Our studies have been designed to expand and refine the augmentation factor equation. Future revisions to the equation building upon this work should enable designers to quantitatively balance the power and mass of an exoskeleton more effectively. This understanding will allow exoskeleton designers to optimize performance more effectively, minimizing the arduous prototype and test cycle. The central dilemma to an exoskeleton mechanical design is weighing the added power and the burden of added weight. This dilemma is captured succinctly with one attribute: the specific power of the actuation architecture being used. To demonstrate this, consider an example design similar to that in Mooney et al. [5] that is a single ankle design sized to provide around 25 W of positive mechanical power to maximize benefit for a 1.5 m/s walk. This single design necessarily has a fixed mass across all velocities. This design creates two limits: at velocities less than the design velocity (1.5 m/s) the augmentation factor is limited by how much power the operator can accept, while at higher velocities the augmentation factor is limited by the peak power of the actuator. These two limits result in a peak achievable augmentation factor and a defined range of velocities where the device can provide metabolic neutrality or better. In contrast, a high specific power alternative with a comparable mass but significantly increased peak power capacity can greatly increase the available augmentation capability. As a result, the recent push towards lightweight exoskeletons has been somewhat misguided. Achieving high augmentation factors across a wide range of velocities cannot be done solely by focusing on reducing weight of a design; a more essential aspect is to provide actuation with higher specific powers – more power with less weight. References 1. Martin, L.: HULC. http://​www.​lockheedmartin.​com/​us/​products/​exoskeleton/​hulc.​html 2. Raytheon: XOS2. http://​www.​army-technology.​com/​projects/​raytheon-xos-2-exoskeleton-us/​ 3. Amundson, K.: Human exoskeleton control and energetics. Ph.D. Dissertation, UC Berkeley, Berkeley, CA, USA (2007) 4. Wehner, M., Quinlivan, B., Aubin, P.M., Martinez-Villalpando, E., Baumann, M., Stirling, L., Holt, K., Wood, R., Walsh, C.: A lightweight soft exosuit for gait assistance. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 3362–3369. IEEE (2013) 5. Mooney, L.M., Rouse, E.J., Herr, H.M.: Autonomous exoskeleton reduces metabolic cost of human walking during load carriage. J. Neuroengineering Rehabil. 11(1), 1 (2014)CrossRef 6. Righetti, L., Buchli, J., Mistry, M., Kalakrishnan, M., Schaal, S.: Optimal distribution of contact forces with inverse-dynamics control. Int. J. Robot. Res. 32(3), 280–298 (2013)CrossRef 7. Mordatch, I., Wang, J.M., Todorov, E., Koltun, V.: Animating human lower limbs using contact-invariant optimization. ACM Trans. Graph. (TOG) 32(6), 203 (2013)CrossRef 8. Posa, M., Cantu, C., Tedrake, R.: A direct method for trajectory optimization of rigid bodies through contact. Int. J. Robot. Res. 33(1), 69–81 (2014)CrossRef 9. Karumanchi, S., Edelberg, K., Baldwin, I., Nash, J., Satzinger, B., Reid, J., Bergh, C., Lau, C., Leichty, J., Carpenter, K., Shekels, M., Gildner, M., Newill-Smith, D., Carlton, J., Koehler, J., Dobreva, T., Frost, M., Hebert, P., Borders, J., Ma, J., Douillard, B., Shankar, K., Byl, K., Burdick, J.W., Backes, P., Kennedy, B.: Team robosimian: semi-autonomous mobile manipulation at the 2015 darpa robotics challenge finals. J. Field Robot. (2016). Special Issue on the 2015 DRC Finals 10. Liu, C., Atkeson, C.G., Feng, S., Xinjilefu, X.: Full-body motion planning and control for the car egress task of the darpa robotics challenge. In: IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 527–532. IEEE (2015) 11. Kuindersma, S., Deits, R., Fallon, M., Valenzuela, A., Dai, H., Permenter, F., Koolen, T., Marion, P., Tedrake, R.: Optimization-based locomotion planning, estimation, and control design for the atlas humanoid robot. Auton. Robots 40(3), 429–455 (2016)CrossRef 12. Sawicki, G.S., Ferris, D.P.: Mechanics and energetics of incline walking with robotic ankle exoskeletons. J. Exp. Biol. 212(1), 32–41 (2009)CrossRef 13. Tözeren, A.: Human Body Dynamics: Classical Mechanics and Human Movement. Springer Science & Business Media, New York (1999) 14. Biewener, A.A.: Animal Locomotion. Oxford University Press, Oxford (2003) 15. Farris, D.J., Sawicki, G.S.: The mechanics and energetics of human walking and running: a joint level perspective. J. Roy. Soc. Interface (2011). doi:10.​1098/​rsif.​2011.​0182 Mobile Robots 1 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_32 High-Speed Wall-Contacting Drive for Underground Automatic Transport Vehicle Feasibility Study of Proposed Cruise Control Framework Hiroyuki Karasawa¹  , Takuro Okubo¹  , Rui Fukui¹  , Masayuki Nakao¹   and Yuichi Kodama²   (1) Department of Mechanical Engineering, School of Engineering, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-8656, Japan (2) Komatsu Ltd., Akasaka 2-3-6, Minato-ku, Tokyo 107-8414, Japan     Hiroyuki Karasawa (Corresponding author) Email: karasawa@hnl.t.u-tokyo.ac.jp URL: http://www.ra-laboratory.com/r/research/research-e.html   Takuro Okubo Email: okubo@hnl.t.u-tokyo.ac.jp URL: http://www.ra-laboratory.com/r/research/research-e.html   Rui Fukui Email: fukui@hnl.t.u-tokyo.ac.jp URL: http://www.ra-laboratory.com/r/research/research-e.html   Masayuki Nakao Email: nakao@hnl.t.u-tokyo.ac.jp URL: http://www.ra-laboratory.com/r/research/research-e.html   Yuichi Kodama Email: yuuichi\_kodama@komatsu.co.jp URL: http://www.ra-laboratory.com/r/research/research-e.html Abstract To increase the speed of automatic transport vehicles in underground narrow pathways, we have developed a differential four-wheel vehicle that can keep contact with the wall using roller bumpers. In wall-contacting driving, the vehicle may be damaged by the collision with convexity and concavity of the wall. In this research, a preliminary experiment highlights what kind of convexity and concavity greatly affect the vehicle. Based on those results, this paper proposes a convexity and concavity detection method using geometric feature extraction of wall roughness. To evaluate the performance of the method, experiments have been conducted by using a scale model. The experimental results clarify the feasibility of the detection and the collision avoidance of wall convexity and concavity using the proposed feature values extracted from the distance sensor data. Keywords Field roboticsWheel mobile robotAutomatic transportationCollision avoidance 1 Motivation, Problem Statement, Related Work In underground mining sites, automation of physical work is necessary for operators’ safety and to avoid human operation in hazardous environment [1]. Cruising speed increase of transport vehicle is also required to improve productivity even in narrow and complex pathways (Fig. 1) [2]. High-speed automatic cruise for indoor robots using path tracking is a popular method [3]. However, this method is not applicable to underground mines because the working area of an underground mine is expanding constantly and the maintenance of path requires unacceptable efforts. Some studies use laser range finder (LRF) for self-localization of high-speed automatic cruise [4–6]. Because the self-localization contains substantial errors, a vehicle needs to slow down while driving in a narrow curve [2]. To keep the cruising speed high even in a curve, we propose a differential four-wheel vehicle that can keep contact with the wall using roller bumpers (Fig. 2). The wall-contacting drive may cause a collision with convexity and concavity of the wall. For safety and high-speed cruising, it is necessary to avoid impulsive contact with the convexity and concavity. Some studies, regarding passenger cars, detect and avoid obstacles using radar or/and camera [7–9]. However in underground mines, bad illumination condition and existence of dust prevent from using such devices. In this research, we develop a cruise control method realizing both high-speed wall-contacting drive and avoidance of impulsive contact with the convexity and concavity of wall in underground narrow pathways. [] Fig. 1. Pathways of underground mine. [] Fig. 2. Conceptual sketch of the vehicle with wall-contacting roller bumpers. 2 Technical Approach First a basic experiment, using a scale model, illustrates how the impulsive contact with the convexity and concavity of wall influences the vehicle. Second, we propose a cruise control method to realize both high-speed wall-contacting drive and avoidance of impulsive contact. This method includes two different-level controls; a global (low speed) control based on self-localization and a local (high speed) control to avoid the convexity and concavity of wall (Fig. 3). The global control modifies the output velocity of the local control. The basic approaches of the cruise control are as follows; (1) the vehicle run at the center of the pathway based on the self-localization using LRF in straight pathway, (2) the vehicle runs at the same speed using wall-contacting drive in a curve. While driving with wall-contacting method, a uniaxial fast-response distance sensor identifies the convexity and concavity of wall. If the sensor detects a large convex or concave shape, the vehicle try to avoid it. After the vehicle avoids an obstacle, the vehicle modifies its direction, runs straight for a while and returns to wall-contacting drive. Basically, the vehicle can recognize whether it runs in straight pathway or a curve using self-localization and can avoid convexity or concavity even if it runs in straight pathway. This study deals with the detection problem of the convexity and concavity of the wall as a classification problem of various wall geometric states. States of the wall are described as the combination of time-series values measured by the distance sensor. [] Fig. 3. Cruise control framework comprising global and local controls. Multiple feature values are extracted to increase robustness of classification against the size of the wall roughness and the vehicle speed. To realize fast-response avoidance action, calculation for feature extraction must be executed on board. An environmentally-resistant microcontroller with poor calculating power is used as on-board computer. To reduce the amount of calculation, a Nearest-Neighbor-based algorithm classifies the wall states. Centroids of clusters generated by k-means algorithm are used as prototypes of the classifier. 3 Results A one-tenth scale model of a transport vehicle (Fig. 4) is used to conduct experiments. Results show the feasibility of the high-speed cruising using wall-contacting drive in narrow pathways. The outcomes clarify how the roughness of wall affects the vehicle. In particular, the contact with the convexity affects the vehicle more seriously than those with any other types of wall roughness (Fig. 5). The convexity and concavity detection method using geometric features has been developed. A uniaxial fast-response distance sensor is installed in the vehicle to measure the wall roughness (Fig. 4(b)). Multiple geometric feature values as shown in Fig. 6 enable robust detection in various driving conditions. Once detecting convexity or concavity, the vehicle stops inner wheel rotation and changes its direction for avoiding. Self-localization and wall roughness detection are integrated into a cruising control method. Experimental results demonstrate the feasibility of detection and collision avoidance of wall convexity and concavity (Fig. 7). [] Fig. 4. One-tenth scale model of a transport vehicle. [] Fig. 5. Acceleration when the vehicle collides with large convexity. [] Fig. 6. Conceptual sketch of the feature values. [] Fig. 7. Sequence photographs of the collision avoidance action. 4 Scheduled Experiments Experiments was conducted using the scale model and wooden driving course as shown in Fig. 8(a). Curve radius (700, 1200, 1700 mm) and wall convexity and concavity change as Fig. 9 shows. A motion capture system (NaturalPoint, Inc., Prime17W, Fig. 8(b)) is used to eliminate the effect of random noises caused by a LRF-based self-localization. This configuration enables pure evaluation of the local control. Figure 10 shows an example of the vehicle trajectory measured by the motion capture. A uniaxial laser distance sensor (Panasonic Corp., HL-G125-A-C5, Fig. 4(b)) measures the wall roughness. 30 feature values are designed by reference to feature values to recognize hand gestures using wrist contour measuring [10]. Figure 6 shows the selected three feature values; (1) maximum continuous increase, (2) kurtosis, (3) ratio of half area. They have high ratio of inter-class variance to intra-class variance. If the vehicle observes the convexity or concavity, distance increases/decreases drastically and sensor value also increases/decreases then features (1) and (3) can extract this phenomenon. Feature (2) can express the magnitude of distance change. Two different conditions are compared; (a) three feature values are used, (b) only one feature value (Feature (1)) is used. [] Fig. 8. Driving course setup. [] Fig. 9. Specification of convexity and concavity. [] Fig. 10. The vehicle trajectory. [] Fig. 11. An example of sequential classification results. In the k-means method, k values: the class number of wall roughness states, is configured to ten. This number is the minimum required to distinguish between convexity and curve. It means that the microcontroller needs to compare the current feature values with ten prototypes. As a result, fast control frequency (800 Hz) is achieved on the microcontroller (Atmel Corp., AT91SAM3X8). Figure 11 is an example of sequential classification results. Blue line expresses the distance measured by the distance sensor. Red and green lines symbolize classification results using three feature values and one feature value respectively. The distance value starts decreasing when the vehicle get close to the curve. In contrast, the distance value increases drastically when the vehicle approaches the convexity. Then, the convexity is obviously detected. 5 Main Experimental Insights Table 1 shows that the classifier misses small concavity using only one feature value. The experiments on each condition are conducted five times. Figure 12 indicates that the classifier mistook entrance of curve for convexity. Table 1 also indicates that detection performance varies depending on the radius of curve. Especially when the radius is small, it is very difficult to distinguish between the entrance/exit of curve and the small convexity just by using one feature value. Kurtosis and ratio of half area can take into account temporal information. They realize robust detection in various vehicle speed and curve radius. Table 1. Accuracy rate of detection of the wall convexity and concavity (n = 5). [] [] Fig. 12. Experimental result at 5 km/h. Figure 13 shows the experimental result of collision avoidance, and describes the maximum collision-free cruising speed at each condition. Compared with concavity, convexity is more difficult to avoid. This is because convexity demands larger avoidance action than concavity does as shown in Fig. 14. It is also difficult to avoid collisions when the curve radius is small because the distance from the measuring spot to the vehicle is too short. Additional distance sensor may enlarge the time for avoidance and improve the avoiding performance. [] Fig. 13. Maximum collision-free cruising speed (n = 5). [] Fig. 14. The difference of required avoidance actions for convexity and convex. References 1. MacNeil, P.: International mining fatality database. Department of Primary Industries (2008) 2. Gustafson, A.: Automation of load haul dump machines. Lulea takniska universitet, Doctoral thesis (2011) 3. Ohkawa, S., Takita, Y., Date, H.: Development of autonomous mobile robot using articulated steering vehicle and lateral guiding method. J. Robot. Mechatron. 27(4), 337–345 (2015)CrossRef 4. Marshall, J., Barfoot, T., Larsson, J.: Autonomous underground tramming for center-articulated vehicles. J. Field Robot. 25(6–7), 400–421 (2008)CrossRef 5. Roberts, M.J., Duff, S.E., Corke, P., Sikka, P., Winstanley, J.G., Cunningham, J., et al.: Autonomous control of underground mining vehicles using reactive navigation. In: Proceedings IEEE International Conference on Robotics and Automation (ICRA), vol. 4, pp. 3790–3795 (2000) 6. Mkel, H.: Overview of LHD navigation without artificial beacons. Robot. Auton. Syst. 36(1), 21–35 (2001)CrossRefMATH 7. Mukhtar, A., Xia, L., Tang, B.T.: Vehicle detection techniques for collision avoidance systems: a review. IEEE Trans. Intell. Transp. Syst. 16(5), 2318–2338 (2015)CrossRef 8. Tokoro, S., Kuroda, K., Kawakubo, A., Fujita, K., Fujinami, H.: Electronically scanned millimeter-wave radar for pre-crash safety and adaptive cruise control system. In: Proceedings of IEEE Intelligent Vehicles Symposium, pp. 304–309 (2003) 9. Bohmlander, D., Yano, V., Brandmeier, T., Zimmer, A., Lee, L.L., Chi-Biu, W., Dirndorfer, T.: A novel approach for intelligent pre-crash threat assessment systems. In: Proceedings IEEE 17th International Conference on Intelligent Transportation Systems (ITSC), pp. 954–961 (2014) 10. Fukui, R., Watanabe, M., Shimosaka, M., Sato, T.: Hand shape classification with a wrist contour sensor. In: Experimental Robotics: The 13th International Symposium on Experimental Robotics, pp. 939–949 (2013) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_33 Realizing Robust Control of Autonomous Vehicles You Hong Eng¹  , Hans Andersen², Scott Drew Pendleton², Marcelo H. AngJr.² and Daniela Rus³ (1) Singapore-MIT Alliance for Research and Technology, Singapore, 138602, Singapore (2) National University of Singapore, Singapore, 119260, Singapore (3) Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA     You Hong Eng Email: youhong@smart.mit.edu Abstract We present our work on autonomous vehicles in an urban environment to provide mobility-on-demand as a solution to the first and last mile problem. The software architecture for our vehicles is reviewed with focus on new developments of speed and steering control algorithms to ensure robust performance for autonomous driving. For speed control, a brake/throttle switching controller based on velocity error and desired acceleration is implemented to achieve fast speed response without excessive switching. An iterative learning algorithm is used to train feedforward signals which are then used to compensate the repeated disturbances over a fixed route. For steering control, a revised pure pursuit steering control algorithm is designed to improve path tracking performance. The methods are validated though on-road experiments which demonstrate a speed control that is robust against changing road grade and a steering control that has smaller cross-track errors. Keywords Autonomous vehiclesIterative learning controlSwitching controlPure pursuit steering controlMobility-on-Demand 1 Introduction Mobility-on-Demand (MoD) transportation services, such as car-sharing and taxi services, have potential to alleviate the ever growing concerns of traffic congestion in urban areas by providing better accessibility to public transportation. First-and-last mile transportation to and from mass transit hubs can encourage people to more heavily utilize public transportation and incentivize reduced private vehicle ownership. However, most MoD services require much manpower through a workforce of drivers to chauffer passengers and/or rebalance the vehicle fleet, which results high operating costs and limited service availability. Autonomous vehicle (AV) fleets have been proposed this shortcoming, as well as provide other anticipated benefits such as additional safety, better road efficiency, and reduced environmental impact [10]. This work is meant to overview the common software which enables self-driving across SMART’s various AV platforms, with particular emphasis on recent development in speed and steering control algorithms. Topics including mapping and localization, moving obstacle detection, the mission booking system and route planner, local path replanner, and safe speed advisor will be discussed only in brief, with references to prior works provided for details. The core technical contributions of this work are: (1) An iterative learning speed control approach to compensate for uncertain road conditions (2) A modified pure pursuit steering controller which accounts for path waypoint orientation in addition to relative position. For each of these points, the technical approach will be highlighted, with experimental results from our autonomous vehicle presented for verification of the methods. The subsequent sections are organized as follows: Sect. 1.1 will introduce selected related works. Section 2 will review the software architecture (Sect. 2.1), the speed control with iterative learning (Sect. 2.2), and the steering controller (Sect. 2.3). Section 3 will describe the experiments and its results in detail. Section 4 will conclude this work with discussion on the experimental insights. 1.1 Related Works Typically, ILC is used to improve performance of systems that execute a single, repeated operation which could be found widely in industrial manufacturing [9], and chemical process [8]. Survey paper [2] provides an good introduction to ILC and its various applications. We have seen applications of ILC in aerial, ground and underwater vehicle. In [7], ILC was used to improve the motion of a underwater manipulator by identifying hydrodynamics parameters. A simple ILC scheme is used by Chen and Moore [3] to improve the path-following of a ground vehicle. In [12], Purwin and Andrea used ILC to iterative compute the reference trajectory so that a quadrotor could change state quickly from one to another. A recent work by Kapania and Gerdes [6] used ILC to design a reference feedforward controller to correct transient path tracking errors of an autonomous race vehicle. References relevant to the software architecture are given within the body of Sect. 2.1 as they relate to each subsystem. The steering controller tested in this work was first presented in [1], and is now evaluated on the road car platform rather than golf car, driven at higher speeds on urban roads. 2 Technical Approach 2.1 Software Architecture A common software architecture is shared across all of SMART’s AVs (operational video links provided): a Mitsubishi iMIEV road car [10] (https://​youtu.​be/​l1iYoBkCzAI, https://​youtu.​be/​bIGcG4K2ckc), several Yamaha YDRE golf cars [11] (https://​youtu.​be/​aSm027Rzj9E), and a Heartway Medical S19 mobility scooter (https://​youtu.​be/​\_​6otshNzqqo?​t=​3). Although this work only reports on experiments onboard the road car platform in later Sect. 3 for brevity, the methods discussed in the work have been applied to all platforms. The software architecture is broadly categorized into (a) perception, (b) planning, and (c) control modules [11]. Two core components of the perception module are (i) mapping & localization and (ii) moving obstacle detection. For mapping & localization, we build our own prior feature map as an occupancy grid map of vertical surfaces using pose SLAM and localize based on this map using Synthetic LIDAR, a specific sensor model, to perform Adaptive Monte-Carlo Localization. Moving object detection from 2D LIDAR is achieved through the supervised learning method of Support Vector Machine (SVM) applied to classify object clusters based on extracted spatial-temporal features. Planner mission input is given by ride requests handled through an online server, where missions take the format [Pick-up Station, Drop-off Station]; a route planner then solves for the shortest distance path over a directed graph representation of the road network to connect these two stations via a Dijkstra search. Local replanning is available for path deviations around path blockages via an RRT\* algorithm. Target speeds along the planner’s output path are decided by obstacle clearance in both lateral and longitudinal directions along the projected path such that the vehicle slows for nearby obstacles. The steering and speed control algorithms are the main focus of this work and are detailed in the subsequent sections. 2.2 Speed Control with Iterative Learning In order to get a widespread acceptance, autonomous vehicles not only have to be safe but also need to maintain high level of comfort throughout the journey. Longitudinal speed control plays an important role in achieving that. However, the speed control is inherently challenging as the longitudinal dynamics is highly nonlinear due to varying road grade, friction variation of the road surface, uncertainty in torque generated by the vehicle engine, and actuators delay. Our approach is to combine both feedback and feedforward in controlling the longitudinal speed. The classical Proportional-Integral (PI) controller is designed to generate the feedback control signal to regulate the vehicle speed. However, as a feedback controller only reacts to error, it always has a lag in transient tracking. On the other hand, feedforward signals are anticipatory and can compensate for the repeating disturbances due to the road condition in advance by learning from the previous iterations. Figure 1 shows the detailed block diagram of the longitudinal speed control system. There are two inputs that control the vehicle speed: throttle and brake. As the throttle and brake should not be applied at the same time, there is a switching logic responsible for choosing between throttle and braking loops. [] Fig. 1. ILC is implemented in parallel structure, which directly alters the throttle command, [$$\text {U}\_{\text {throttle}}$$]. A widely used switching logic is based on the comparison of required acceleration against the drag terms and closed-throttle engine torque [5]. The closed-throttle engine torque is the torque generated by the car engine when the throttle angle is set to zero and it is a function of engine speed. However, while this information may be available for the vehicle’s manufacturer, it is difficult to obtain by an end user. Here, we report the design of the switching logic based on synthetic acceleration [$$a\_{syn}$$], which is defined as: [$$\begin{aligned} a\_{syn} = a\_d + \lambda v\_{e} \end{aligned}$$] (1) where [$$a\_d$$] is the desired acceleration, [$$v\_e$$] is the speed error and [$$\lambda $$] is a constant gain. When [$$a\_{syn}$$] is greater than a throttle threshold, [$$a\_{throttle}$$], the throttle loop is chosen. When [$$a\_{syn}$$] is smaller than a brake threshold, [$$a\_{brake}$$], the brake loop is selected. Hysteresis is introduced in the region between [$$a\_{throttle}$$] and [$$a\_{brake}$$] to avoid excessively frequent switching between throttle and brake. There are two main reasons to combine the desired acceleration and speed error to create a synthetic acceleration for switching. By considering the desired acceleration, we speed up the response time of the switching. However, the desired acceleration is zero when vehicle needs to be regulated at constant speed. During such maneuvers, we must use the speed error to determine the switching. Next, we would like to design a learning controller such that tracking performance of longitudinal speed can be improved iteratively by running the autonomous vehicle on the same route multiple times. Our hypothesis is that the vehicle should be able to learn to drive better based on its past experiences. This is in contrast to the typical method where vehicle dynamics is identified explicitly via system identification [4] which requires substantial amount of effort and time. Conceptually, iterative learning control (ILC) provides a suitable framework to design such a learning controller because the route connecting any two stations is predetermined, and hence the unknown disturbances, such as road grade and road friction, tend to be repeatable with respect to the physical location of the route. The learning controller will output feedforward signals, which directly alter the throttle command from the feedback controller, to improve the tracking performance by incorporating error information from the previous trials into the control for subsequent iterations. In this paper, we adopted the notation of a widely used ILC learning algorithm [2], which is formulated as: [$$\begin{aligned} \mathbf u \_{j+1} = \mathbf Q (q)[\mathbf u \_j+\mathbf L (q)\mathbf e \_j] \end{aligned}$$] (2) where [$$\mathbf u \_j \in \mathfrak {R}^{N \times 1}$$] is the feedforward signal at j iteration, [$$\mathbf Q (q) \in \mathfrak {R}^{N \times N}$$] is the Q-filter, [$$\mathbf L (q)\in \mathfrak {R}^{N \times N}$$] is the learning function, [$$\mathbf e \_j \in \mathfrak {R}^{N \times 1} $$] is the speed error at j iteration and N is the maximum path index. As each increment of path index equivalent to 0.1 m, the total path length is 0.1 Nm. Do note that the [$$\mathbf u \_j$$] and [$$\mathbf e \_j$$] are vectors of N elements indexed by path index. For the first iteration, we initialize [$$\mathbf e \_{j}$$] and [$$\mathbf u \_{j}$$] as zero vectors and store them in Memory 1 and Memory 2 respectively as shown in Fig. 1. At the end of each iteration, [$$\mathbf e \_{j}$$] is filtered through L(q) and added to the previous feedforward [$$\mathbf u \_{j}$$], and filtered through filter Q to become [$$u\_{j+1}$$]. The updated feedforward signal is applied to the vehicle in the next iteration. There are numerous ILC algorithms and design techniques to choose from [2]. In this research, the P-type learning function is used to design the [$$\mathbf L (q)$$] filter because it is a tunable design and can be applied without extensive modeling and analysis of the plant. It has the form of constant gain [$$k\_p$$] multiply by identity matrix of size N, [$$\begin{aligned} \mathbf L (q) = k\_p \cdot \mathbf I \_{N \times N}. \end{aligned}$$] (3) As in many ILC algorithms, we set [$$\mathbf Q (q)$$] to an identity matrix and thus do not include Q-filtering. This is required for perfect tracking according to the Theorem 3 discussed in [2]. In practice, the Q-filter is usually a low-pass filter and hence can be used to disable learning at high frequencies. We do not need to include the Q-filter because both the desired speed and speed measurement are already filtered to remove high frequency noise before entering the ILC algorithm. 2.3 Steering Control There are various methods that have been presented in the literature. Two of the most popular types are geometric methods, and model based methods. Geometric path tracking algorithms use simple geometric relations to come up with steering control laws. Compared to model based path tracking algorithms, geometric path tracking algorithms are relatively simpler to implement, more robust to path curvatures, and tend to work better for lower speed driving. One of the most popular geometric path tracking algorithms is the pure pursuit path tracking algorithm. The algorithm is relatively simple and easy to implement, and is robust to disturbances and large lateral error. The Stanley method is another popular geometric steering controller. Compared to the pure pursuit method, the Stanley method, has better tracking results at higher speed. However, the Stanley method is not as robust to disturbances, and as it requires continuous curvature path rather than way points, bit is susceptible to discretization related problems [13]. In our previous work [1], we described an alternative formulation to the pure pursuit path tracking algorithm, and implemented it on an autonomous golf car operating on a pedestrian environment. In this work, we evalute the effectiveness of the proposed algorithm for driving autonomously on urban roads. Driving on roads poses different challenges compared to driving on a pedestrian environment, as the vehicle operates at higher velocity, but pursuing paths with gentler turns and longer straight segments. The input path to the controller is a series of ordered waypoints [$$[(x\_p(1),y\_p(1)),(x\_p(2),y\_p(2)),\cdots ,(x\_p(N),y\_p(N))]$$]. The approximate heading of way point i can be computed as [$$\begin{aligned} \theta \_p(i)=tan^{-1}\left( \frac{y\_p(i+1)-y\_p(i)}{x\_p(i+1)-x\_p(i)}\right) \end{aligned}$$] (4) [] Fig. 2. Tangential heading of pure pursuit circular arc [] Fig. 3. Offset distance and lookahead distance of the pursued path with desired circular arc heading Referring to Fig. 2, with the original pure pursuit algorithm, the tangential heading [$$\gamma $$] of the circular arc segment at the tracked way point [$$(x\_p(i),y\_p(i))$$] [$$\begin{aligned} \gamma =\theta +2\eta \end{aligned}$$] (5) In order to pursue the correct heading of way point i ([$$\theta \_p(i)$$]) from the current position (x, y) and heading [$$(\theta )$$], the corresponding lookahead angle [$$\eta \_d$$] can be computed by substituting [$$\gamma =\theta \_p(i)$$], and [$$\eta =\eta \_d$$] to be: [$$\begin{aligned} \eta \_d=\frac{ \theta \_p(i) -\theta }{2} \end{aligned}$$] (6) In order to calculate the corresponding steering angle, the tracked way point is now translated by a distance d perpendicular to the path segment from [$$(x\_p(i),y\_p(i))$$]. The offset distance d can take either positive or negative value. Referring to Fig. 3, the offset distance d and the new lookahead distance [$$L\_{fwd}$$] can be computed as [] (7) [] Fig. 4. Offset distance d as expected cross track error for constant curvature path [] Fig. 5. Offset distance d compensation for pure pursuit path tracking with desired path heading considered Referring to Fig. 4, the distance d would be the expected current cross track error if it were pursuing a constant curvature path. Referring to Fig. 5 in order to correct for this error, a modified tracked way point can be used by inverting the expected cross track error. The point is translated from the original tracked way point by a distance of [$$-d$$] away in the direction [$$(\theta +\dfrac{\pi }{2})$$]. [$$d= {\left\{ \begin{array}{ll} sgn(d)\ D ,&{} \text {if } |d| \ge D\\ d, &{} \text {otherwise} \end{array}\right. } $$] A saturation constant D can be introduced to limit the maximum offset that is tolerated by the algorithm. This constant has to be set at a reasonable amount in order to avoid instability that can be caused by huge initial cross track and/or heading error, or otherwise a bad representation of the continuous path. The modified tracked way point can then be written as: [] (8) and the corresponding steering angle can be computed using the standard pure pursuit algorithm tracked to this point. 3 Experimental Result The autonomous driving experiment is conducted in National University of Singapore’s University Town with SMART’s converted iMIEV. Figure 6 presents the previously mapped area. The clockwise path was created by manually driving the car and recording the localization poses. The poses are fitted with Bézier curves. The path is generated by discretizing the Bézier curves with 10 cm resolution. The total length of the path is 1971.8 m, and its maximum curvature is 0.125 m[$$^{-1}$$]. [] Fig. 6. 3D view of the testing track at University town at National University of Singapore, Singapore. The plot on the right highlights the change of altitude versus distance travel along path connecting Stephen Riady and CREATE. The path is parameterized by path index, where each unit increment of the index corresponds to 0.1 m in the physical world. 3.1 Speed Control Figure 6 shows the path (bottom left, highlighted by red dash) connecting two drop off points in the University Town of the National University of Singapore. The path is about 500 m and it is quite hilly with changing road grade. Figure 7 (trial 1) shows the desired speed and the resulting speed response when a well tuned PI controller is used to regulate the speed. The PI controller could regulate the speed well in the flat road segment (path index: 1–600, 2000–2800) but performed poorly along segments where there is rapid change of road grade (path index: 800–1200, 2900–3500, 4500–4800). In order to improve transient performance, ILC is applied to generate feedforward signals (see Fig. 8 for the evolution over the 6 trials) that track the desired speed and reject repeating disturbances. After 5 iterations, at trial 6, the root-mean-square (RMS) error of the speed reduces from 0.6 m/s to 0.28 m/s, showing a good tracking performance even at the areas of rapidly changing road grade. [] Fig. 7. Evolution of speed control performance for six iterations with learning gain [$$k\_p = 1/57$$]. There is no feedforward signal in the trial 1. At the end of each iteration, the error is filtered through learning function, added to the previous feedforward signals to create a new feedforward signals for the next iteration. Root-mean-square (RMS) error of the speed reduces from 0.6 m/s to 0.28 m/s. [] Fig. 8. Evolution of feedforward signal over six iterations. For trial 1, the feedforward signal is zero. After trial 1, the speed error is used as a input to calculate the feedforward signal for the next trial, and the process repeats until the feedforward signals converges or the reduction of speed error is sufficiently small. In general, information gain through ILC training for a specific speed profile is not transferable to other desired speed profiles. It is also generally assume that the initial condition of the plant must be the same for each iteration [2]. However, we show satisfactory performance in our experiments conducted at different speed profiles with varied maximum desired speed and initial condition. Figure 9 shows the experiment results for target speeds 4 m/s and 6 m/s at the left and right plot respectively. The result shows that the same feedforward signals are also effective at 4 m/s and 6 m/s even though the ILC is trained for 5 m/s. There is no sign of speed reduction when the vehicle climbs the hill starting at path index 3000. Furthermore, there are small deviation in the starting location of the experiments. The performance is robust again such change in initial condition. Figure 10(a) shows the throttle and brake command applied during trial 6. First, there was not excessive switching between the throttle and brake. The time when brake was activated is highlighted by the green vertical line. Second, the plot provides reason on why there was not significant drop of speed when vehicle was climbing hill at path index 3000. The throttle command was ramped up from 12.5% at path index 2800 to 38% at path index 2950 to prepare for the hill climbing. This predictive behaviour is similar to how a human driver would increase the throttle command preemptively just before hill climbing. Figure 10(b) illustrates the switching logic using a phase plane plot of desired acceleration and speed error. Two solid lines divide the plot into three regions: throttle, neutral and brake. Hysteresis is effective in the neutral region to prevent excessive switching. If the trajectory transitions into the neutral region from brake region, it will remain in brake mode until it steps into the throttle region, and vice versa. [] Fig. 9. Experiments at different speed profiles. Although the ILC is trained for trapezoidal speed profile of 5 m/s, it is shown to be effective also for speed profile of 4 m/s and 6 m/s respectively. [] Fig. 10. (a) Speed response at trial 6 and the corresponding throttle/brake command. Green vertical lines highlights the region when brake was applied. (b) Phase plot of desired acceleration [$$a\_d$$] and speed error [$$v\_e$$]. The figure illustrate how switching of throttle and brake are determined based on the synthetic acceleration [$$a\_{syn} = a\_d + 0.3 v\_e .$$] with [$$a\_{throttle} = 0$$] and [$$a\_{brake} = -0.3$$]. 3.2 Steering Control The cross track error is defined as the minimum distance between the anchor point to the path. The RMS cross track error for autonomous driving with the original pure pursuit algorithm is 0.1895 m, while the error for autonomous driving with the modified pure pursuit algorithm with the offset tolerance [$$D = 0.1$$] m, [$$D= 0.3$$] m and [$$D= 0.5$$] m are 0.1778 m, 0.1635 m and 0.1700 m respectively. Showing 6.2%, 13.7%, and 10.3% improvements reduction in RMS cross track error. In our previous experiment, we drove a golf buggy autonomously in a pedestrian environment, pursuing a pre-programmed path with more intricate curvatures. We achieved reduction in RMS cross track error of up to 46%. Figure 11 plots the cross track error histogram and Fig. 12 plots the cross track error on different sections of the path as color intensity for various values of D. The histogram of the error (Fig. 11) shows that there is a shift in the distribution of the cross track error. The modified pure pursuit algorithm narrows the spread of the distribution, and reduces the maximum cross track error magnitude. [] Fig. 11. Cross track error histogram for experiment with [$$L\_{min}=4.0$$] [] Fig. 12. Cross track error intensity plot for experiment with [$$L\_{min}=4.0$$] Referring to Fig. 12, it can be concluded that the original pure pursuit algorithm performs very well on long straight sections of the road, however the corner cutting effects are apparent as the curvature of the tracked path increases. The effectiveness of the modified pure pursuit algorithm is apparent when overcoming the corner cutting problem in the original pure pursuit algorithm. The modified pure pursuit algorithm achieves similar performance to the original pure pursuit algorithm along the straight line sections, and reduces the cross track error magnitude significantly when it is tracking path sections with large curvature. And therefore, the overall reductions in RMS cross track error when pursuing paths with gentle curves and long straight sections is not as significant as when pursuing intricate paths with bigger curvatures. While keeping the minimum lookahead constant, increasing the offset tolerance D decreases the RMS error. This demonstrates that on some sections of the path where the offset d is large, compensating the way point closer to the original value of d is beneficial. However, increasing D too much may result in oscillatory behavior thus increased error when pursuing straight line segments along the path, and therefore appropriate value has to be assigned to D. 4 Conclusion The complete software architecture for SMART’s AVs is reviewed in brief, with detailed focus on new developments in control algorithms. By introducing feedforward signals trained by ILC into the feedback controller, the RMS error of speed control was reduced significantly only after a few iterations. The resultant controller is robust against the changes in initial condition and speed profile. For steering control, we describe a new formulation for the pure pursuit geometric path tracking algorithm which incorporates the orientation information to compute the steering input. Experimental results have shown that the proposed method is effective in overcoming the corner cutting problem. Acknowledgment This research was supported by the Future Urban Mobility project of the Singapore-MIT Alliance for Research and Technology (SMART) Center, with funding from Singapore’s National Research Foundation (NRF). References 1. Andersen, H., Chong, Z.J., Eng, Y.H., Pendleton, S., Ang Jr., M.H.: Geometric path tracking algorithm for autonomous driving in pedestrian environment. In: IEEE International Conference on Advanced Intelligent Mechatronics (2016) 2. Bristow, D.A., Tharayil, M., Alleyne, A.G.: A survey of iterative learning control. IEEE Control Syst. 26(3), 96–114 (2006)CrossRef 3. Chen, Y.Q., Moore, K.L.: A practical iterative learning path-following control of an omni-directional vehicle. Asian J. Control 4(1), 90–98 (2002)CrossRef 4. Dias, J.E.A., Pereira, G.A.S., Palhares, R.M.: Longitudinal model identification and velocity control of an autonomous car. IEEE Trans. Intell. Transp. Syst. 16(2), 776–786 (2015) 5. Hedrick, K., et al.: Brake system modeling, control and integrated brake/throttle switching phase I. California Partners for Advanced Transit and Highways (PATH) (1997) 6. Kapania, N.R., Gerdes, J.C.: Path tracking of highly dynamic autonomous vehicle trajectories via iterative learning control. In: 2015 American Control Conference (ACC), pp. 2753–2758. IEEE (2015) 7. Kawamura, S., Sakagami, N.: Analysis on dynamics of underwater robot manipulators based on iterative learning control and time-scale transformation. In: 2002 IEEE Proceedings of the International Conference on Robotics and Automation, ICRA 2002, vol. 2, pp. 1088–1094. IEEE (2002) 8. Mezghani, M., Roux, G., Cabassud, M., Le Lann, M.-V., Dahhou, B., Casamatta, G.: Application of iterative learning control to an exothermic semibatch chemical reactor. IEEE Trans. Control Syst. Technol. 10(6), 822–834 (2002)CrossRef 9. Norrlof, M.: An adaptive iterative learning control algorithm with experiments on an industrial robot. IEEE Trans. Robot. Autom. 18(2), 245–251 (2002)CrossRef 10. Pendleton, S., Chong, Z.J., Qin, B., Liu, W., Uthaicharoenpong, T., Shen, X., Fu, G.M.J., Scarnecchia, M., Kim, S.-W., Ang, M.H., Frazzoli, E.: Multi-class driverless vehicle cooperation for mobility-on-demand. In: Intelligent Transportation Systems World Congress (ITSWC) (2014) 11. Pendleton, S., Uthaicharoenpong, T., Chong, Z., Fu, G., Qin, B., Liu, W., Shen, X., Weng, Z., Kamin, C., Ang, M., et al.: Autonomous golf cars for public trial of mobility-on-demand service. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1164–1171. IEEE (2015) 12. Purwin, O., D’Andrea, R.: Performing and extending aggressive maneuvers using iterative learning control. Robot. Auton. Syst. 59(1), 1–11 (2011)CrossRef 13. Snider, J.M.: Automatic Steering Methods for Autonomous Automobile Path Tracking. Technical report (2009) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_34 Learning to Plan for Visibility in Navigation of Unknown Environments Charles Richter¹   and Nicholas Roy¹   (1) Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, Massachusetts 02139, USA     Charles Richter (Corresponding author) Email: car@mit.edu   Nicholas Roy Email: nickroy@mit.edu Abstract For robots navigating in unknown environments, naïvely following the shortest path toward the goal often leads to poor visibility of free space, limiting navigation speed, or even preventing forward progress altogether. In this work, we train a guidance function to give the robot greater visibility into unknown parts of the environment. Unlike exploration techniques that aim to observe as much map as possible for its own sake, we reason about the value of future observations directly in terms of expected cost-to-goal. We show significant improvements in navigation speed and success rate for narrow field-of-view sensors such as popular RGBD cameras. However, contrary to our expectations, we show that our strategy makes little difference for sensors with fields-of-view greater than 80[$$^{\circ }$$], and we discuss why the naïve strategy is hard to beat. 1 Introduction Robot navigation in unknown environments is often performed with a receding-horizon approach, where a motion planner selects a trajectory of some length that makes progress toward the goal while avoiding observed obstacles. Progress is often measured by a heuristic estimate of the remaining cost-to-goal from the end of the planned action. This approach can be represented mathematically with the following optimization problem, which is repeatedly re-solved online to select the optimal action [$$a\_t^\*$$] as the robot progresses through the environment: [$$\begin{aligned} a\_t^\* = \underset{a\_t \in A}{\text {argmin}} \,\,\, j(b\_t,a\_t) + h(b\_t,a\_t)\text {,}\,\,\,\text {s.t.}\,\,\, g(b\_t,a\_t) = 0. \end{aligned}$$] (1) Here, [$$a\_t$$] is an action to be executed at time t, chosen from a set A of possible actions. The robot’s belief [$$b\_t$$] contains its knowledge of its own configuration at time t and a partial map built from sensor measurements up to time t. The total estimated cost of choosing action [$$a\_t$$], given [$$b\_t$$], is the sum of the action cost [$$j(b\_t,a\_t)$$] and the heuristic estimate [$$h(b\_t,a\_t)$$] of the cost remaining beyond the end of [$$a\_t$$]. In this paper, we focus on the minimum-time navigation problem, so [$$j(b\_t,a\_t)$$] returns the time duration of [$$a\_t$$] and [$$h(b\_t,a\_t)$$] estimates the remaining time to reach the goal after action [$$a\_t$$] has been completed. Finally, to guarantee safety, we use a collision constraint [$$g(b\_t,a\_t)$$], which returns 1 if action [$$a\_t$$] intersects either obstacles or unknown regions in the belief, or leads to a state for which collision or entering unknown regions would be inevitable [4], and returns 0 otherwise. We enforce this constraint by ensuring the existence of an “emergency-stop” trajectory that could bring the robot to a stop from the end of action [$$a\_t$$] without intersecting an obstacle or unknown region of the map. This collision constraint creates a crucial interaction between the feasibility of actions and the parts of the map that have been observed thus far. The robot is constrained to select trajectories that lie entirely within the known free space of the map, at speeds that ensure it can stop before hitting an obstacle or entering the unknown. In an unknown map, it is very difficult or impossible to accurately estimate the remaining cost-to-goal—as we will discuss in Sect. 2, doing so would amount to solving a difficult POMDP—so the heuristic [$$h(b\_t,a\_t)$$] is inherently a simplified approximation. Common approximations are often based on following the apparent shortest path to the goal at a fixed speed. In this work, we define a shortest-path cost-to-goal estimator [$$d(b\_t,a\_t)$$], which computes the time needed to reach the goal from the end of [$$a\_t$$], respecting the obstacles in belief [$$b\_t$$], assuming holonomic kinematics, and assuming that speed is held constant at the initial speed of [$$a\_t$$]. We implement [$$d(b\_t,a\_t)$$] using Dijkstra’s algorithm on a 2D graph of nodes connected to their 16 nearest neighbors. Using this shortest-path function as the heuristic amounts to setting [$$h(b\_t,a\_t) = d(b\_t,a\_t)$$] in Eq. (1). To demonstrate the problems with this approach, Fig. 1 illustrates a sequence of trajectories planned by a simulated car approaching a blind corner using the shortest-path heuristic. This heuristic greedily guides the robot toward the inside of the corner, where it is unable to see the free space ahead. Without visibility around the corner, the local planner is forced to select an action that slows down from 4 m/s to 1 m/s to preserve sufficient stopping distance. [] Fig. 1. Navigating with a simple shortest-path heuristic, the robot greedily keeps to the inside of the turn since it is nearer to the goal. This position occludes the free space around the corner, forcing the robot to slow down to maintain a safe stopping distance. Chosen actions [$$a\_t$$] are drawn in blue, emergency-stop actions (lengths proportional to the square of speed) are green, and both are constrained by [$$g(b\_t,a\_t)$$] to lie within known free space. Black lines indicate the field-of-view. This common heuristic ignores the fact that the constraints imposed by the current unknown regions of the map may actually be lifted when future observations are taken and more free space is added to the map. A more accurate heuristic might have guided the robot along a slightly longer path, sacrificing some immediate progress in exchange for improved future visibility into the unknown regions of the map. Greater visibility might, in turn, enable faster actions in future planning steps resulting in an overall reduction in time-to-goal. Unfortunately, reasoning explicitly about this observation-action interaction typically implies the daunting complexity of POMDPs. To avoid this complexity while retaining information-gathering behaviors, our approach is to augment the shortest-path heuristic with a learned function that models the change in cost-to-goal resulting from the next observation and subsequent action. Next, we will derive this learned model and use it to efficiently reason about the observation-action interactions that are ignored by common shortest-path heuristics. 2 Problem and Technical Approach The heuristic [$$h(b\_t,a\_t)$$] approximates the expected remaining cost-to-goal after taking action [$$a\_t$$], given belief [$$b\_t$$]. However, the true optimal value of this quantity represents the solution to a POMDP in which the partially observable map is modeled as a random variable drawn from a distribution over environments [9]. Not only would this POMDP be intractable to solve, but it would also require us to model the distribution over maps, which, for realistic environments, would be an extremely challenging problem unto itself. Without knowing this map distribution, we cannot accurately estimate the probability of future sensor measurements in order to search forward through action-observation sequences. Rather than employing POMDP solution techniques, our approach is instead to use machine learning to somehow approximate [$$h(b\_t,a\_t)$$] from training examples. While we could consider attempting to learn the entire global heuristic from data, this is not likely to be a predictable quantity due to the wide range of possible map geometries and sizes. Instead, we observe that [$$d(b\_t,a\_t)$$] gives reasonable guidance much of the time, but is missing specific information about the effects of possible map observations in the immediate future, which is more local and hence predictable than the entire heuristic. Therefore, we will model the effects of possible future observations in the form of a correction to the shortest-path heuristic. Let [$$h^\*(b\_t,a\_t)$$] represent the expected cost-to-goal under the optimal policy¹. Our learned function [$$f\_h(b\_t,a\_t)$$] is intended to model the difference between the shortest path heuristic and this true optimal cost-to-goal: [$$\begin{aligned} f\_h(b\_t,a\_t) \approx h^\*(b\_t,a\_t) - d(b\_t,a\_t). \end{aligned}$$] (2) By capturing this effect from training data, we avoid the need to explicitly model the environment distribution and search over the vast space of observations. And, while we cannot compute [$$h^\*(b\_t,a\_t)$$], even offline during training, we can locally approximate it using a one-step look ahead technique, which we describe next. [] Fig. 2. (a) Training map, with blue region from which training configurations are sampled. (b)–(d) Training sequence: (b) Random robot configuration is sampled, with belief [$$b\_0$$] and feasible candidate action [$$a\_0$$]; (c) the sensor is simulated from the end of [$$a\_0$$], yielding belief [$$b\_1$$] containing additional observed free space; (d) next action [$$a\_1$$] is selected to minimize cost, given the newly observed free space in belief [$$b\_1$$]. The choice of [$$a\_1$$] will be used to compute the label of this data point. 2.1 Training We train the model [$$f\_h(b\_t,a\_t)$$] using a dataset of labeled belief-action pairs, which are intended to represent realistic scenarios the robot could encounter. We generate each data point in simulation by first randomly sampling a robot configuration within a training map. Figure 2a shows a training map we use for hallway environments. Since all corners in our hallway environments share the same geometry (for a given hallway width), it is sufficient to train on a map with a single corner. To restrict ourselves to realistic configurations the robot might encounter while approaching a turn, we sample uniformly from the blue rectangular region, with random headings in the range [$$[-45^{\circ }$$], [$$45^{\circ }]$$] with respect to the hallway direction, and with random speed. We then generate a belief [$$b\_0$$] from the point of view of the sampled robot configuration. To generate a realistic belief in general, it would be necessary to aggregate a history of measurements, which in turn would require us to sample a realistic history of states for each data point. To avoid this complication, we generate the sampled belief by simulating a single sensor measurement with a 360[$$^{\circ }$$] field of view from the sampled configuration. This strategy reveals the local map around the sampled configuration while still capturing the effects of occlusions due to walls, reasonably approximating beliefs encountered at runtime. Having sampled a configuration and generated the initial belief, we then randomly select a feasible action [$$a\_0$$] the robot could execute, subject to the constraint [$$g(b\_0,a\_0) = 0$$], which states that the action is collision-free and allows the robot room to stop within known free space (Fig. 2b). Next, we simulate the sensor, with the actual field-of-view, from the end of [$$a\_0$$] and incorporate the measurement into [$$b\_0$$] to form a new updated belief, [$$b\_1$$] (Fig. 2c). Then, we select the next action, [$$a\_1$$], from the end of [$$a\_0$$], that makes the most progress toward the goal given the updated belief [$$b\_1$$] and subject to the constraint [$$g(b\_1,a\_1) = 0$$] (Fig. 2d). Progress is measured with a calculation of [$$d(b\_1,a\_1)$$], from the end of [$$a\_1$$]. When computing [$$d(b\_1,a\_1)$$], we assume that the speed of the robot returns from the terminal speed of [$$a\_1$$] to the initial speed of [$$a\_0$$] according to the acceleration limits of the robot, and then maintains that speed to the goal. The training label is then computed using the optimal choice for [$$a\_1$$]: [$$\begin{aligned} y(b\_0,a\_0) = \underset{a\_1\in A}{\text {min}}\left[ j(b\_1,a\_1) + d(b\_1,a\_1)\right] - d(b\_0,a\_0). \end{aligned}$$] (3) Thus, while we cannot compute [$$h^\*(b\_t,a\_t)$$] even offline during training, our training labels still approximate the desired result of capturing the deviation from the shortest-path heuristic that is due to potential future observations, and subsequent actions the planner could take, contingent upon those observations. Finally, we compute a low-dimensional set of features, [$$\phi (b\_t,a\_t)$$] for each belief-action pair that capture the useful predictive information. For this work, we use two features based on the boundary between known free space and the unknown, which we refer to as the “frontier”: (1) the fraction of the map frontier in [$$b\_t$$] that will be visible from the robot configurations along action [$$a\_t$$], and (2) the average distance from states along the action [$$a\_t$$] to all frontier locations in the map. 2.2 Prediction and Planning with the Learned Model Our dataset of N data points has the form [$$D = \{(\phi \_1,y\_1),$$] [$$\dots ,$$] [$$(\phi \_N,y\_N)\}$$] where indices in this case represent different data points (not time t) and we write [$$\phi \_i$$] as shorthand for [$$\phi (b,a)$$] for the [$$i^{th}$$] data point. We use the Nadaraya-Watson estimator to make predictions at runtime: [$$\begin{aligned} f\_h(b\_t,a\_t) = \frac{\sum \_{i=1}^N K(\phi (b\_t,a\_t)-\phi \_i)y\_i}{\sum \_{i=1}^N K(\phi (b\_t,a\_t)-\phi \_i)}. \end{aligned}$$] (4) Our proposed heuristic function is then the sum of the shortest-distance heuristic and the learned function: [$$h(b\_t,a\_t) = d(b\_t,a\_t) + f\_h(b\_t,a\_t)$$]. Putting it all together, our planner repeatedly re-solves the following optimization problem: [$$\begin{aligned} a\_t^\* = \underset{a\_t \in A}{\text {argmin}} \,\,\,j(b\_t,a\_t) + d(b\_t,a\_t) + f\_h(b\_t,a\_t)\text {,}\,\,\,\text {s.t.}\,\,\, g(b\_t,a\_t) = 0. \end{aligned}$$] (5) To measure the performance of our approach, we will compare it to a “baseline” planner, which uses a simpler control law that is identical to (5), except that it omits [$$f\_h(b\_t,a\_t)$$] and instead uses the simpler heuristic [$$h(b\_t,a\_t) = d(b\_t,a\_t)$$]. 3 Results To evaluate our method, we trained a separate model for each combination of sensor field-of-view (from 45[$$^{\circ }$$] to 120[$$^{\circ }$$]) and hallway width (2 m, 2.5 m and 3 m). For each of these combinations, we conducted simulation trials in 50 randomly-generated hallway environments [10], recording the performance of our planner as well as that of a “baseline” planner, described above. We selected the hallway environment type because hallways are very common and also challenging for high-speed navigation due to the visual occlusion at turns. We selected the range of sensor fields-of-view to be relevant for widely used RGBD sensors such as the Microsoft Kinect and Asus Xtion, which produce point clouds of 57[$$^{\circ }$$] and 58[$$^{\circ }$$] horizontal field-of-view, respectively, as well as standard and wide-angle cameras. We use a re-planning frequency of 20 hz, a planning horizon of 1.5 m for the pre-computed set A of actions, and a combination of feed-forward and feedback control to execute the planned motions. We use a 2D occupancy grid map representation, which can represent free, occupied and unknown cells, and is initialized with every cell unknown. To build maps, we simply project range measurements into the map using raycasting from the vehicle configuration, which we assume is known exactly. We do not use probabilistic or pose-graph optimizing SLAM. In simulation, we used a 10 cm map resolution, which is sufficient to capture the perfectly rectangular shapes of the simulated hallways. 3.1 Learned Models Figure 3 shows examples of learned models for 60[$$^{\circ }$$] field-of-view in a 2 m wide hallway and 120[$$^{\circ }$$] field-of-view in a 3 m wide hallway. Both models illustrate the same overall trend. Intuitively, the highest cost belief-action pairs occur when actions are located very near to the map frontier while simultaneously offering low frontier visibility (lower left corners of plots in Fig. 3). This scenario occurs when the robot approaches a corner hugging the inside wall of the hallway, which enables it to get close to the frontier while keeping the frontier mostly occluded by the wall. In fact, this costly behavior is characteristic of the baseline planner. [] Fig. 3. Learned model [$$f\_h(b\_t,a\_t)$$] for (a) 60[$$^{\circ }$$] field-of-view in 2 m wide hallway and (b) 120[$$^{\circ }$$] field-of-view in 3 m wide hallway. Our models predict a reduction in cost if we select actions that either increase distance from the frontier or visibility of the frontier, or both. When the robot is approaching a blind corner, these two features both suggest approaching the corner for a wide turn, not only to increase available stopping distance between the robot position and the frontier, but also to take an observation of more occluded space, thereby pushing the frontier farther ahead. The only observable deviation from this trend is that in the 60[$$^{\circ }$$] field-of-view model, the predicted cost begins to increase slightly as the fraction of visible frontier rises above 50%. The reason for this rise is that for narrow field-of-view sensors, the robot must drive directly toward the frontier to observe a large fraction of it, which in hallway environments is correlated with shorter stopping distances and therefore slower speeds. We believe that a more descriptive set of features would produce a more accurate model, but this one nevertheless captures the intuitive notion that keeping greater distance from the frontier and observing some portion of the frontier are both advantageous. 3.2 Simulation Success Rates We observed the surprising result that for narrow fields-of-view, the baseline planner often failed to reach the goal altogether. These failures resulted from the planner becoming inadvertently trapped in a small region of known free space, such that no feasible actions would yield views of additional free space. The robot was therefore forced to stop. In these cases, a 3-point turn would be required to reorient the robot, take observations revealing free space toward the goal, and proceed. However, we consider these events to be failures. This scenario is pictured in Fig. 4a, where the gray lines represent all actions in action set A. Nearly all of them would cause the robot’s bounding box (not shown for these actions) to intersect a wall or an unknown map cell and are therefore infeasible. The exceptions are highlighted in blue and red. For these actions, we then determine whether there exists a safe stopping action, illustrated in green, with the bounding box of the robot at the end of the emergency-stop action illustrated as a black rectangle. The red actions making progress toward the goal do not have safe stopping actions because the bounding box of the robot would intersect both a wall and unknown space. The only feasible choices, with stopping actions lying entirely within known free space, are illustrated in blue. However, these feasible actions neither make progress toward the goal, nor afford any useful sensor viewpoints. Hence the planner has become trapped. [] Fig. 4. (a) Baseline planner trapped, steering away from the goal, due to insufficient observed free space to the right. The only feasible actions, that are both collision-free and have a feasible emergency-stop action, are blue. See main text for discussion. (b) Success rates in reaching the goal as a function of hallway width and field-of-view, for our method and the baseline planner. While our planner was also susceptible to this failure mode, our learned guidance function was very effective at avoiding it for certain hallway widths and fields-of-view. Figure 4b shows the success rate for both planners, indicating that our planner was able to reliably reach the goal (for a given hallway width) using a field-of-view approximately 5–10[$$^{\circ }$$] narrower than what would be required for the baseline planner. While these failures depend on the relative size of the hallway and the length of actions, using a planning horizon that is too short can lead to poor receding-horizon planning behavior in other ways, since short actions cannot span the robot’s true range of turning maneuvers. Therefore, we show this dependency by varying the width of the hallway instead. These results show that the baseline planner is likely to fail under some fairly benign and common conditions. For example, using a Microsoft Kinect sensor on an RC car robot with a 0.8 m turning radius in a 2 m wide hallway could be expected to succeed only 20% of the time, whereas our method succeeds 100% of the time under these conditions. 3.3 Simulation Time-to-Goal and Speed as a Function of Visibility Figure 5a illustrates the time-to-goal for our planner, normalized by the time-to-goal for the baseline planner in the same environments. For a given hallway width and field-of-view, we averaged this normalized time across all trials in which both planners succeeded. By augmenting the shortest path heuristic with a learned model, we observed times-to-goal as low as 70% of the baseline value in 2 m wide and 2.5 m wide hallways, for narrow fields-of-view. We observe smaller, but still substantial improvement in time-to-goal for the 3 m wide hallway and for wider fields of view. It is intuitively sensible that the maximum improvement should occur in cases where the observable free space is most limited. [] Fig. 5. Simulation results: (a) average time-to-goal (lower is better), (b) average speed (higher is better), and (c) average maximum visible range. Averaged results are normalized by the corresponding performance of the baseline planner in the same environments. Plot color indicates hallway width. Figure 5b illustrates the average (normalized) navigation speeds corresponding to the time-to-goal results. In each hallway width, our solution is up to 1.4 times faster than the baseline planner. However, since our planner typically travels a slightly longer distance in order to project its sensor visibility around corners, the increased speed does not necessarily translate to shorter times-to-goal, as is the case for 120[$$^{\circ }$$] field-of-view. Figure 5c quantifies the greater environment visibility achieved by our method. We recorded the maximum visible range in each simulated sensor measurement during simulation trials of our planner and compared those with the maximum visible ranges observed by the baseline planner. In every case, our method was able to view a greater distance ahead, resulting in the ability to plan higher-speed actions. [] Fig. 6. (a)–(c) Example planning sequence using our method, which guides the robot wide around the corner, gaining visibility and maintaining a higher speed. (d)–(e) Simulated trajectory and resulting observed map of the baseline planner and our method, in a 2.5 m hallway with a 60[$$^{\circ }$$] sensor field-of-view. Green and red dots indicate start and goal, respectively. Dark gray indicates the hidden map, and light gray regions are those that have not been observed. Figures 6a–c show that our method gives rise to the intuitive behavior of swinging wide around corners to obtain more advantageous views of free space enabling greater stopping distances. Compared to the default baseline behavior pictured in Fig. 1, this behavior is qualitatively different and faster. Figures 6d and e show the trajectories and resulting maps produced by the baseline planner compared to our method in a 2.5 m wide hallway with a 60[$$^{\circ }$$] field-of-view. Our planner approaches each corner from a wider angle than the baseline planner, observing more of the environment while reaching the goal in less time. 3.4 Experimental Demonstration on Autonomous RC Car We tested our planner in a hallway environment in the MIT Stata Center on the autonomous RC car pictured in Fig. 1. The car is equipped with a Hokuyo UTM-30LX LIDAR, a Microstrain 3DM-GX3-25 IMU and a Gigabyte Brix Intel dual-core i7 computer with 16 GB RAM. To perform state estimation, we fuse LIDAR odometry [2] with IMU measurements in an extended Kalman filter. For these experiments, we used a 5 cm map resolution to capture the smaller irregularities of the real environment. We artificially limited the Hokuyo planar LIDAR field-of-view for the purposes of map building (but not for state estimation). We picked a section of hallway consisting of three 90[$$^{\circ }$$] turns, with hallway widths of approximately 1.9 m, roughly matching our simulated hallway distribution. In this environment, we conducted 10 trials each for the baseline planner and our planner, using both 60[$$^{\circ }$$] and 90[$$^{\circ }$$] fields-of-view. Figure 7 illustrates our experimental map and autonomous RC car, and Table 1 summarizes the experimental results. In Table 1, the time-to-goal column averages across only the successful trials in each category, while the mean and maximum speed columns represent averages over all trials in order to quantify the speed differential for the 60[$$^{\circ }$$] case where the baseline planner did not reach the goal in any trials. [] Fig. 7. (a) Experimental trajectory of our planner through a hallway in the MIT Stata Center with numbered turns (inset) and expanded view of our planner’s trajectory through turn #3 (main image), exhibiting good visibility around the corner as a result of its wide-turn behavior. (b) Baseline planner failed at turn #1. (c) Our autonomous RC car in turn #2 of the experimental environment. Figure 7a shows a map and trajectory from one of our planner’s successful trials using a 60[$$^{\circ }$$] field-of-view. The close-up view of the third turn illustrates a characteristic example of the good visibility of free space our planner was able to obtain, hence allowing it to maintain a greater speed around the turn. Table 1. Experimental results. +-------------------+----------+---------+-----------------------------------+-------------------------------+------------------------------+ | FOV | Planner | Success | Time-to-Goal (s) (successes only) | Mean Speed (m/s) (all trials) | Max Speed (m/s) (all trials) | +:==================+:=========+:========+:==================================+:==============================+:=============================+ | 60[$$^{\circ }$$] | Baseline | 0/10 | N/A | 2.25 | 4.29 | +-------------------+----------+---------+-----------------------------------+-------------------------------+------------------------------+ | | Learned | 8/10 | 14.36 | 3.08 | 5.35 | +-------------------+----------+---------+-----------------------------------+-------------------------------+------------------------------+ | 90[$$^{\circ }$$] | Baseline | 4/10 | 14.26 | 3.06 | 5.30 | +-------------------+----------+---------+-----------------------------------+-------------------------------+------------------------------+ | | Learned | 10/10 | 13.68 | 3.43 | 5.55 | +-------------------+----------+---------+-----------------------------------+-------------------------------+------------------------------+ With a 60[$$^{\circ }$$] field-of-view, our planner succeeded in 8 of 10 trials, while the baseline planner failed in every trial. Figure 7b shows a characteristic failure by the baseline planner, becoming trapped as discussed in Sect. 3.2. The success rates and relative speeds listed in Table 1 roughly match our simulation results for 60[$$^{\circ }$$] and 2 m hallway width, with difference likely resulting from the fact that the experimental hallway was slightly narrower than 2 m. Using a 90[$$^{\circ }$$] field-of-view, our planner succeeded in all 10 trials, while the baseline planner succeeded in only 4. This success rate for the baseline planner is considerably worse than the simulation results, again likely due to the narrower hallway, but our approach is robust to these differences. For those trials that did succeed, Table 1 shows that our planner reached the goal in 4% less time than the baseline planner, which is consistent with our simulation results. 4 Related Work The planning literature has addressed the problem of safety for agile autonomous vehicles in partially-known environments [1, 3, 4, 9, 11, 14]. However, these results largely do not consider the contingency between future sensor measurements and action choices, which we target in this work. POMDP algorithms explicitly address this action-observation contingency, but require knowledge of the environment distribution and even approximate methods are generally intractable online [5, 6]. Several methods have efficiently captured the action-observation contingency using a discrete or topological abstraction [7, 12]. However, these methods do not readily translate to realistic sensor measurements or low-level vehicle dynamics. Planning of sensor viewpoints can be found in the exploration literature [8, 13, 15], where the objective is to build a more complete or accurate map. However, he exploration literature assigns a fundamentally different value to measurements than the time-to-goal value we have used in this work. 5 Conclusion Our results are surprising in several respects. First, we did not anticipate the failure mode of robots becoming trapped in a small area of known free space, unable to make progress toward the goal. Given how often this failure mode occurs under common conditions, it warrants a solution such as the one we provide in this work. Second, we expected a more substantial benefit using our refined heuristic across a wider range of fields-of-view. Instead, there is little benefit for fields-of-view greater than 80[$$^{\circ }$$], as the resulting speed improvement does not outweigh the extra distance traveled to obtain better visibility. Ultimately navigation speed is limited by stopping distance, which grows quadratically with the speed of the robot. Therefore, to increase speed, the length of known free space must grow quadratically as well. However, our learned guidance function was able to produce only a modest increase in sensor visibility. Some possibilities for future work might be to quantify a bound on the maximum possible benefit that could be gained from the optimal policy, to explore the characteristics of the environment that affect that bound, and to implement or learn a richer set of predictive features to help get closer to the optimal policy. References 1. Arora, S., et al.: Emergency maneuver library-ensuring safe navigation in partially known environments. In Proceedings of the ICRA (2015) 2. Bachrach, A., et al.: RANGE - robust autonomous navigation in GPS-denied environments. J. Field Rob. 28(5), 644–666 (2011)CrossRef 3. Bekris, K.E., Kavraki, L.E.: Greedy but safe replanning under kinodynamic constraints. In: Proceedings of the ICRA (2007) 4. Fraichard, T., Asama, H.: Inevitable collision states-a step toward safer robots? Adv. Rob. 18(10), 1001–1024 (2004)CrossRef 5. Kaelbling, L.P., et al.: Planning and acting in partially observable stochastic domains. Artif. Intell. 101(1), 99–134 (1998)MathSciNetCrossRefMATH 6. Kurniawati, H., et al.: SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In: Proceedings of the RSS (2008) 7. Likhachev, M., Stentz, A.: PCPP: Efficient probabilistic planning with clear preferences in partially-known environments. In: Proceedings of the AAAI (2006) 8. Makarenko, A.A., et al.: An experiment in integrated exploration. In: Proceedings of the IROS (2002) 9. Richter, C., et al.: Bayesian learning for high-speed navigation in unknown environments. In: Proceedings of the ISRR (2015) 10. C. Richter et al. Markov chain hallway and Poisson forest environment generating distributions. Technical Report MIT-CSAIL-TR-2015-014 (2015) 11. Schouwenaars, T., et al.: Receding horizon path planning with implicit safety guarantees. In: Proceedings of the ACC (2004) 12. Simmons, R., Koenig, S.: Probabilistic robot navigation in partially observable environments. In: Proceedings of the IJCAI (1995) 13. Stachniss, C., et al.: Information gain-based exploration using Rao-Blackwellized particle filters. In: Proceedings of the RSS (2005) 14. Watterson, M., Kumar, V.: Safe receding horizon control for aggressive MAV flight with limited range sensing. In: Proceedings of the IROS (2015) 15. Yamauchi, B.: A frontier-based approach for autonomous exploration. In: Proceedings of the Computational Intelligence in Robotics and Automation (1997) Footnotes 1 Following the POMDP problem formulation developed in [9], this true optimal expected cost would be defined as [$$h^\*(b\_t,a\_t) = \sum \_{s\_{t+1}}P(s\_{t+1}|b\_t,a\_t)V^\*(s\_{t+1})$$]. We omit a detailed discussion of the POMDP formulation of this problem for brevity, but note that it provides the basis for our mathematical approach and approximations.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_35 Parallel Manipulation of Millirobot Swarms Using Projected Light Fields Christopher Lawler¹ , Ivan Penskiy¹, Aaron Sirken¹ and Sarah Bergbreiter¹ (1) Department of Mechanical Engineering, University of Maryland, College Park, Maryland 20742, USA   Abstract This paper introduces a method to form global patterns of 10+ autonomous millirobots, the Tiny Terrestrial Robot Platforms (TinyTeRPs), with only local information available to each robot. The TinyTeRPs are equipped with light sensors that measure a globally projected dynamic light field, and radios for RSSI inter-robot distance estimation. An agent-based simulation is used to extrapolate the experimentally observed behavior in swarms of up to 225 robots. In concert, a diffusion-inspired analytic model is presented to create closed-form equations for estimating the time it will take to form a pattern. This allows the behavior to be characterized using an algorithmic complexity framework. With this approach, forming patterns can be done in time scaling linearly with the number of robots in the pattern. Furthermore, the models predict that no matter how large swarm size grows, patterns can be formed in constant time as long as only a fixed fraction of the swarm needs to succeed. Insights into the reasons for this behavior, characterizations of the sensors and their interactions, and suggestions to build on this work are included. C. Lawler—Designated speaker 1 Introduction In nature, ants coordinate tasks, form large-scale structures, and jointly manipulate objects, all using only local stimulation and signal exchange via antennae and pheromones [1]. The emergence inherent in these bottom-up intelligences is of interest, and the mapping between the rules each agent follows and the collective behavior they generate is under-explored [2]. Previous work on generalizable global pattern formation has required the robots to have a notion of the global pattern they are supposed to form [3], global coordinates of where they are in the space, or a central computer to calculate trajectories even if the sensing is local [4]. Global patterns can be achieved with local inputs in limited cases by exploiting carefully planned differences between each member of the swarm, which grows infeasible as the swarm size increases [5]. Self-assembly of 16 physical and 100 simulated centimeter-scale robots to a non-global pattern has also been demonstrated [6]. Light fields have been proposed as a method of simulating pheromone-based local interactions until true chemical-based systems can be developed [7]. The millirobot ALICE has previously been shown to follow and produce light trails within a confined arena [8]. In the following experiments, light fields are used to simulate interaction with the environment, and radio signal strength indication (RSSI) is used for robot-to-robot interaction; elsewhere the latter has been accomplished through other means such as infrared signals [9]. The work in this paper experimentally demonstrates rapid control of 10 simple and robust autonomous millirobots by projecting global light fields. The robots use a single vertically-oriented light sensor to simultaneously follow gradients towards brighter areas which constitute the desired pattern and track the pattern as it changes over time, as sketched in Fig. 1. Using these simple local rules, global patterns are formed in linear time complexity with respect to swarm size, matching an analytic model, with a characteristic time of seconds. These experiments validate simulations, which extrapolate that behavior in up to 225 robots. This concept can be generalized to any two-dimensional field as long as the robots have sensors to detect these fields. Ultimately, the idea could be generalized to 3D fields as well, such as an RF field or thermal signatures of survivors trapped in rubble. Another target application for this approach is assembly, where one could achieve results as in [2] without the need for a compiler beforehand or internal representation of the target structure. To do this, the robots could create pheromone hotspots for each other to follow, and the shape of the structure would be an emergent property of the local rules. [] Fig. 1. Schematic of robot swarm controlled by time and spatially varying field. Robots form initial shape U (first panel), the light pattern changes to form M (second), and the robots re-form new pattern (third). 2 Technical Approach 2.1 Spatial and Time Varying Field Light is projected onto the floor to create an arena and its intensity varies with space and time by playing a video or switching between Powerpoint slides. The patterns in the field consist of spots in which centers are brighter than the threshold compared to in lines 5 and 9 of Algorithm 1, which each robot runs. A light gradient is provided around the spots to help steer the robots to the spots. [] Fig. 2. Twelve TinyTeRPs, the robots used in this work. 2.2 Robots Hardware. The robots used in this work are the Tiny Terrestrial Robotic Platforms (TinyTeRPs), originally introduced in [10] and pictured in Fig. 2. They each have processors, a radio, a six-axis inertial sensor, a new vertically oriented ams AG TSL237T light-to-frequency sensor, and a 3 cm x 3.5 cm treaded chassis. Treads allow the robots to traverse multiple terrains, not just flat surfaces. The modular design of the TinyTeRP made it simple to add the light sensor. Software. The algorithm each robot runs is inspired by chemical binding of agents and sites, or ants foraging for a new food source. Each robot uses the gradient ascent routine from [11] to move to the nearest unoccupied site - a random walk was also tested, but its non-directionality increases travel time and probability of error. Each robot continually broadcasts an empty message over radio every 40 ms, whose strength neighboring robots record. If the detected RSSI value is higher than a set “collision RSSI threshold,” it indicates that the two robots are occupying the same space and it needs to change course. A “spot threshold” is used to determine if the robot is in a projected light spot and the “boundary threshold” determines if the robot has exited the projected arena. Robot speed is set at run time and is not currently changed through the course of the algorithm. This method is formally described in Algorithm 1. [] 2.3 Parameter Selection The performance of Algorithm 1 depends heavily on the choice of parameters such as the collision RSSI threshold, the light thresholds to detect target spots and boundaries, and robot speed, and there are trade-offs. For one, the routine relies on the TinyTeRP being fast enough to back out of an occupied site before gradient ascent re-engages. A higher speed means each robot can search more space in the same amount of time but also means a TinyTeRP is more likely to collide with a neighbor before it can process a strong enough radio proximity signal. One way of mitigating this is to lower the collision RSSI threshold. To quantify this trade-off between robot speed and radio threshold, the probability of collision avoidance was measured for 15 different points in the speed-RSSI threshold space. One robot was directed on a collision course with a second stationary one 20 times. The percentage of successful crash avoidance is displayed in Fig. 3, where a value of 1 indicates that the robots never collided. [] Fig. 3. Collision avoidance success with respect to speed/threshold trade-off. The red star marks the operating conditions for this paper. However, the RSSI threshold cannot be arbitrarily low; if the exclusion radius is wider than the spot, robots will be excluded from areas that need to be traversed to reach open spots. Spot size could be modified, at the risk of shrinking open corridors between spots for a fixed arena size. For balance, the TinyTeRPs operated at a speed of 40–45 cm/s and a -46 dBm collision RSSI threshold. 2.4 Predicting Time to Form Pattern A closed-form equation, based on chemical kinetics rate equations, to predict self-assembly scaling is discussed in [12], which includes three necessary assumptions: 1. 1. Robots are well-mixed and eventually traverse entire space   2. 2. Assembly rate is time- and density-invariant   3. 3. Only two robots can collide at once   The TinyTeRPs move in a quasi-random walk. As long as the robot density is low enough, there are traversable paths between the projected spots of light, and robots are not incorrectly turned away from open spots by nearby non-settled robots, these assumptions are met with high probability. Accordingly, the differential model introduced in [13] is used: [$$\begin{aligned} \frac{dx}{dt} = \frac{(N-x)^2}{\tau } \implies x(t) = \frac{N}{1+\frac{\tau }{tN}} \end{aligned}$$] (1) where: - x is the number of robots in the formation - N is the total number of robots, the same as the number of spots - [$$\tau $$] is the average time it takes for a single robot to stop at a spot Assuming pattern envelope size scales linearly with the number of robots, for a fixed robot speed it takes longer to get to a further spot, so [$$\tau $$] varies with N: [$$\begin{aligned} x = N - R = fN \implies \frac{t}{T} = \frac{(N-R)}{R} = \frac{f}{1-f} \end{aligned}$$] (2) where: - R is the number of robots not in the pattern - f is the fraction of robots in the pattern - [$$\tau = TN$$], such that T is a characteristic time related only to the speed of the robot and the algorithm it runs This model predicts that t = O([$$e^f$$]) as [$$f \rightarrow 1$$] for fixed N and [$$t = O(N^0)$$] for fixed f. If the number of remaining robots R is fixed, then [$$t = O(N)$$]. 3 Experiments 3.1 Experimental Set up Equipment and Procedure. The robots are placed in a 122 cm x 107 cm arena with a projector and camera suspended above, shown in Fig. 4a. The projector can display patterns of different maximum size by adjusting its height from 40 cm off the arena surface to 160 cm, in increments of 15 cm. For each test, the robots and spots were placed according to the set of conditions being tested, as described in the section below. For swarms of sizes 4, 7, and 10, the time it takes for each successive robot in that swarm to reach a spot is recorded. Tests where the RSSI proximity detection failed and robots collided were excluded. [] Fig. 4. Modes of investigation Initial Conditions. The number of spots is equal to the number of robots placed in the arena. The size of the pattern projected is linear with the number of spots, which implies that the assumption made to arrive at Eq. 2 is valid. It also means that while increasing N, spot size and robot movement can be kept the same while not overcrowding the arena and violating Assumption 1. For these tests, the spots are arranged in an evenly spaced grid, with gaps at the ends if N is not a perfect square. In one set of initial conditions, depicted in Fig. 5a, the robots are distributed randomly throughout the arena before the pattern is projected. In the second, depicted in Fig. 5b, the robots are spaced evenly along the long edge of the arena, as if at the exposed edge of a space needing to be explored. While placing the robots randomly does not undercut any of the three assumptions, placing them all in close proximity to start is more likely to result in multi-body collisions and radio exclusion zone conflicts. The second set of tests is designed to discern how these violations alter the collective behavior, and whether the same model can still be used. [] Fig. 5. 4-robot test location initial conditions. Pictures are overexposed because they capture the instant after the pattern is illuminated 3.2 Simulation A simulation of the experiment described above was written in Java to allow for exploration of behavior for larger numbers of robots. The program has a visualization module for watching tests unfold, a still of which is included as Fig. 4b. A motion and sensing model of the TinyTeRPs along with a 0.1 mm resolution model of the arena was created, and the parameters for the motion model were fit with the experimental results presented below. The motion model had random terms to account for hardware variation, so for each swarm size the model was run 100 times. The averages and standard deviations of these trials are presented. 4 Results 4.1 Random Initial Conditions Data gathered in the experiments with random initial positions, as described in Sect. 3, are presented in Fig. 6. Figure 6a shows an example set of tracks created by TEMA motion analysis software with the robots in their final positions. Figure 6b-d display different dimensions of the 4, 7, and 10 robot trial results. They present the average and standard deviation of 5 experimental runs as each point, as well as simulation results and analytic model predictions. The only free parameter of the model, T, was fit using the results of the 7 robot constant-N test, and was kept at that value, 2 s, for predicting all other results. Figure 6b shows a linear trend for assembly time as N increases; in this case the number of robots not assembled, R, is fixed at 2. If N is fixed at 10, the time for each successive robot to fix itself in an assembly spot grows exponentially as shown in Fig. 6c. Finally, Fig. 6d shows that in simulation, for constant [$$f = 0.9$$] the time to assemble does not increase with N. [] Fig. 6. Results when robot initial positions distributed randomly 4.2 Non-random Initial Conditions Data gathered in the experiments with non-random initial conditions are presented in Fig. 7. Figure 7a presents the diffusion analytic model with two values of parameter T, along with the averages and standard deviations of 5 experimental trials and 100 simulations, of the time it takes each robot to reach a spot in a four-robot pattern. Figure 7b plots simulated results against the models for how long it takes for all but two robots to form the pattern for increasing swarm size. For both graphs, the first value of T, 2 s, is the same as was used for the random initial condition tests, and the second, 2.8 s, was fit using the experimental results displayed in Fig. 7a. [] Fig. 7. Results when robots placed along edge of arena 5 Experimental Insights When starting the robots at random locations, formation time appears to scale linearly with swarm size N for constant remaining robots R, and the model agrees with the experimental result to within one standard deviation. This is the same asymptotic behavior observed for the Kilobots that had a global image of the pattern, but a serial control algorithm and slower movement speed: 0.3 body lengths per second [9] as opposed to 17 for the TinyTeRPs. These speed and concurrency disadvantages contributed to an assembly time per Kilobot of about 43 s [3], whereas the TinyTeRPs operating in parallel had a characteristic time T, and hence slope of t / N graph for fixed R, of 2 s. In addition to fitting Eq. 2, experiments have validated the simulation as demonstrated in Figs. 6b and c. This simulation indicates that for constant f, robots running Algorithm 1 will assemble in constant time even as N grows arbitrarily large, as shown in Fig. 6d. This result underscores the importance of using parallel algorithms for swarming tasks, which necessitates concurrency management systems such as this experiment’s RSSI-based collision detection. In these experiments, as assembly time increased, probability of individual agent failure rose. A swarm with dozens of inexpensive agents should be robust to the likely failure of a small fraction of agents. Ensuring this property means one can take advantage of the fact that it takes a much smaller, and possibly swarm-size-invariant, amount of time for 90 % or less of TinyTeRPs to form a pattern than it does for all of them to find open sites, as shown in Fig. 6d. For both experimental and simulated results, variance grows with f. The variation is due to the randomness built into the robot’s movements. This divergence builds up as each experiment goes on, and is magnified by the length of time it takes for the last robots to find spots. Figure 6c illustrates both trends. No simulated result is displayed for [$$f\*N = 10$$] because at 50 s the average would make the other points less legible, and no experimental results are shown for [$$f\*N = 9$$] or 10 because the current generation of TinyTeRPs cannot repeatably create that large a pattern without at least one failing. The analytic model is continuous and so cannot make predictions around f = 1. Similarly in Fig. 6d, the second point’s average and deviation is misleadingly high because the number of robots necessary to be 90 % of the total was rounded up, so 9/9 or [$$f = 1$$] were required to be in the pattern instead of 203/225 or [$$f = 0.902$$]. The first experiment’s assumption of random initial robot placements demonstrates that the method does not depend on any artificial configuration, but that is unlikely for proposed applications. However, randomness does guarantee uniformity, whereas application-inspired positions may be too dense. Figure 7a argues that when there are only four robots on the edge of the arena so they can start outside of each others’ radio exclusion zones, their behavior appears very similar to when they started in random locations. The same analytic model still predicts the experimental times well, though a larger T might be necessary because on average, robots start further from a closest exclusive spot. However, in this experiment the size of the arena’s edge scales only with sqrt(N), so as N increases the edge gets increasingly crowded until more robots cannot fit. As seen in Fig. 7b, as robots get denser it takes them longer to form the pattern than the model would predict, indicating that this configuration now violates the assumptions needed for the model to be accurate. These more realistic initial condition results indicate that while this approach and model are useful for examining scaling behavior of swarm assembly, in future applications care will need to be taken to avoid clumping the robots too densely. Acknowledgments This work is supported by NSF Award ECCS1446785 and UMD SEEDS and RISE undergraduate research fellowships. References 1. Camazine, S., Deneubourg, J.-L., Franks, N., Sneyd, J., Theraulaz, G., Bonabeau, E.: Self-Organization in Biological Systems. Princeton U.P., Princeton (2003)MATH 2. Werfel, J., Petersen, K., Nagpal, R.: Designing collective behavior in a termite-inspired robot construction team. Science 343, 754–758 (2014)CrossRef 3. Rubenstein, M., Cornejo, A., Nagpal, R.: Programmable self-assembly in a thousand-robot swarm. Science 345, 795–799 (2014)CrossRef 4. Lee, S.G., Diaz-Mercado, Y., Egerstedt, M.: Multirobot control using time-varying density functions. IEEE Trans. Robot. 31, 489–493 (2015)CrossRef 5. Becker, A., Onyuksel, C., Bretl, T.: Feedback control of many differential-drive robots with uniform control inputs. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2256–2262, October 2012 6. Gross, R., Bonani, M., Mondada, F., Dorigo, M.: Autonomous self-assembly in swarm-bots. IEEE Trans. Robot. 22, 1115–1130 (2006)CrossRef 7. Sugawara, K., Kazama, T., Watanabe, T.: Foraging behavior of interacting robots with virtual pheromone. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004 Proceedings, vol. 3, pp. 3074–3079 (2004) 8. Garnier, S., Tache, F., Combe, M., Grimal, A., Theraulaz, G.: Alice in pheromone land: an experimental setup for the study of ant-like robots. In: Swarm Intelligence Symposium, SIS 2007, pp. 37–44. IEEE, April 2007 9. Rubenstein, M., Ahler, C., Nagpal, R.: Kilobot: a low cost scalable robot system for collective behaviors. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 3293–3298, May 2012 10. Sabelhaus, A.P., Mirsky, D., Hill, L.M., Martins, N.C., Bergbreiter, S., Tinyterp: a tiny terrestrial robotic platform with modular sensing. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 2600–2605, May 2013 11. Jang, H.B., Villalba, R.D., Paley, D., Bergbreiter, S.: Reu: Rssi-based rendezvous on the tiny terrestrial robotic platform (tinyterp), Technical report, Institute for Systems Research (2013) 12. Mastrangeli, M., Abbasi, S., Varel, C., Hoof, C.V., Celis, J.-P., Bhringer, K.F.: Self-assembly from milli- to nanoscales: methods and applications. J. Micromech. Microeng. 19(8), 083001 (2009)CrossRef 13. Zheng, W., Jacobs, H.: Fabrication of multicomponent microsystems by directed three-dimensional self-assembly. Adv. Funct. Mater. 15(5), 732–738 (2005)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_36 Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation Lee Clement¹  , Valentin Peretroukhin¹   and Jonathan Kelly¹   (1) Institute for Aerospace Studies, University of Toronto, Toronto, Canada     Lee Clement (Corresponding author) Email: lee.clement@mail.utoronto.ca   Valentin Peretroukhin Email: v.peretroukhin@mail.utoronto.ca   Jonathan Kelly Email: j.kelly@utias.utoronto.ca Abstract In the absence of reliable and accurate GPS, visual odometry (VO) has emerged as an effective means of estimating the egomotion of robotic vehicles. Like any dead-reckoning technique, VO suffers from unbounded accumulation of drift error over time, but this accumulation can be limited by incorporating absolute orientation information from, for example, a sun sensor. In this paper, we leverage recent work on visual outdoor illumination estimation to show that estimation error in a stereo VO pipeline can be reduced by inferring the sun position from the same image stream used to compute VO, thereby gaining the benefits of sun sensing without requiring a dedicated sun sensor or the sun to be visible to the camera. We compare sun estimation methods based on hand-crafted visual cues and Convolutional Neural Networks (CNNs) and demonstrate our approach on a combined 7.8 Km of urban driving from the popular KITTI dataset, achieving up to a 43 % reduction in translational average root mean squared error (ARMSE) and a 59 % reduction in final translational drift error compared to pure VO alone. Keywords Visual OdometryIllumination estimationSun sensingRobot navigation 1 Motivation, Problem Statement, and Related Work In the absence of reliable and accurate GPS, visual odometry (VO) has emerged as an effective means of estimating the egomotion of robotic vehicles as they navigate through their environment. While VO is generally less prone to drift than other dead-reckoning techniques such as wheel odometry, any dead-reckoning algorithm will inevitably accumulate drift over time due to the compounding of small estimation errors. Indeed, VO suffers from superlinear growth of drift error with distance travelled, mainly due to error in the orientation estimates [14]. Fortunately, the addition of absolute orientation information from, for example, a sun sensor can restrict this growth to be linear [14]. The sun is an appealing source of absolute orientation information since it is readily detectable and its apparent motion through the sky is well characterized in ephemeris tables. The benefits of deriving orientation information from a sun sensor have been successfully demonstrated in planetary analogue environments [6, 11] as well as on board the Mars Exploration Rovers (MERs) [3, 13]. In particular, Lambert et al. [11] showed that by incorporating sun sensor and inclinometer data directly in a stereo VO pipeline, the accumulated drift error can be greatly reduced compared to pure VO alone. In this work, we seek to answer the question of whether similar reductions in stereo VO drift can be obtained solely from the image stream already being used to compute VO. The main idea here is that by reasoning over more than just the geometric information available from a standard RGB camera, we can improve existing VO techniques without needing to rely on a dedicated sun sensor or specially oriented camera. Recently, Lalonde et al. [10] demonstrated that the likely direction of the sun can be estimated from a single RGB image using a combination of weak visual cues such as shadows and a model of the sky [15]. We improve the accuracy and reliability of this technique by incorporating information from the VO estimate itself, and combine it with a modified version of the sun-sensor-augmented stereo VO pipeline developed by Lambert et al. [11] to show that VO drift error can be reduced in this way. We also investigate the use of a recent machine learning approach to sun direction estimation, which makes use of a Convolutional Neural Network (CNN) to predict the azimuth angle of the sun [12]. We present experimental results demonstrating our approach on a combined 7.8 Km of urban driving from the popular KITTI dataset [7], achieving up to a 59 % reduction in final translational drift error and a 43 % reduction in translational average root mean squared error (ARMSE) compared to pure VO. 2 Technical Approach We adopt a sliding window stereo VO technique that has been used in a number of successful mobile robotics applications [2, 5, 8, 9]. While this technique is not the absolute state of the art,¹ it serves as an easily implementable baseline system against which to evaluate our use of visual illumination estimation in the VO pipeline. We stress that our main idea is not tied to any specific VO technique and could be used in any VO system where RGB images are available. Our goal is to estimate a window of [$$SE(3)$$] poses [$$\left\{ \mathbf {T}\_{k+1,b}, \dots , \mathbf {T}\_{k+N,b}\right\} $$] expressed in a base coordinate frame [$$\underrightarrow{\mathcal {F}}\_{b}$$], which we choose to be the first pose in each window. Our VO pipeline tracks keypoints across pairs of stereo images and computes an initial guess for each pose in the window using frame-to-frame point cloud alignment, which it then refines using a local bundle adjustment over the window. Finally, the estimated camera trajectory can be transformed into a desired world coordinate frame [$$\underrightarrow{\mathcal {F}}\_{w}$$] given the transformation [$$\mathbf {T}\_{b,w}$$], which can be obtained from the bundle adjustment solution of the previous window. As we discuss in Sect. 2.3, we select the initial pose [$$\mathbf {T}\_{1,w}$$] to be the first GPS ground truth pose such that [$$\underrightarrow{\mathcal {F}}\_{w}$$] is a local East-North-Up (ENU) coordinate system. 2.1 Observation Model We assume that our stereo images have been de-warped and rectified in a pre-processing step, and model the stereo camera as a pair of perfect pinhole cameras with focal lengths [$$f\_u, f\_v$$] and principal points [$$\left( c\_u,c\_v\right) $$], separated by a fixed and known baseline [$$\ell $$]. If we take [$$\mathbf {p}\_b^j$$] to be the homogeneous 3D coordinates of keypoint j, expressed in our chosen base frame [$$\underrightarrow{\mathcal {F}}\_{b}$$], we can transform the keypoint into the camera frame at pose k to obtain [$$\mathbf {p}\_k^j = \mathbf {T}\_{k,b}\mathbf {p}\_b^j = \begin{bmatrix}p\_{k,x}^j&p\_{k,y}^j&p\_{k,z}^j&1 \end{bmatrix}^T$$]. Our observation model [$$\mathbf {g}\left( \cdot \right) $$] can then be formulated as [] (1) where [$$\left( u,v\right) $$] are the pixel coordinates in the left image and d is the disparity. 2.2 Sliding-window Visual Odometry We use the open-source libviso2 package [8] to detect and track keypoints between stereo image pairs. Based on these keypoint tracks, a three-point Random Sample Consensus (RANSAC) algorithm [4] generates an initial guess of the interframe motion and rejects outlier keypoint tracks by thresholding their reprojection error. We compound these pose-to-pose transformation estimates through our chosen window and refine them using a local bundle adjustment, which we solve using the nonlinear least-squares solver Ceres [1]. The objective function to be minimized can be written as [$$\begin{aligned} \mathcal {J} = \sum \_{k} \sum \_{j} \mathbf {e}\_{\mathbf {y}\_{k,j}}^T \mathbf {R}^{-1}\_{\mathbf {y}\_{k,j}} \mathbf {e}\_{\mathbf {y}\_{k,j}}, \end{aligned}$$] (2) where [$$\mathbf {e}\_{\mathbf {y}\_{k,j}} = \hat{\mathbf {y}}\_{k,j} - \mathbf {y}\_{k,j}$$] is the reprojection error of keypoint j for camera pose k, [$$\mathbf {R}\_{\mathbf {y}\_{k,j}}$$] is the covariance of the errors, and the outer sum runs over the chosen window of poses. The predicted measurements are given by [$$\hat{\mathbf {y}}\_{k,j} = \mathbf {g}\left( \hat{\mathbf {T}}\_{k,b} \hat{\mathbf {p}}^j\_{b}\right) $$], where [$$\hat{\mathbf {T}}\_{k,b}$$] and [$$\hat{\mathbf {p}}^j\_{b}$$] are the estimated poses and keypoint positions in base frame [$$\underrightarrow{\mathcal {F}}\_{b}$$], which we choose to be the first camera frame in the window. 2.3 Orientation Correction In order to combat drift in the VO estimate produced by accumulated orientation error, we adopt the technique of Lambert et al. [11] to incorporate absolute orientation information from the sun directly into the estimation problem. We assume the initial camera pose and its timestamp are available from GPS and use them to determine the global direction of the sun [$$\mathbf {s}\_w$$], expressed as a 3D unit vector, from ephemeris data, where we have defined the world frame [$$\underrightarrow{\mathcal {F}}\_{w}$$] to be a local ENU coordinate frame. For simplicity, we assume that the full trajectory of the camera is sufficiently short so that the sun is effectively static, although it would be straightforward to obtain the global sun direction at each timestep for longer trajectories where the apparent motion of the sun is significant. By transforming the global sun direction into each camera frame [$$\underrightarrow{\mathcal {F}}\_{k}$$] in the window, we obtain predicted sun directions [$$\hat{\mathbf {s}}\_k = \hat{\mathbf {T}}\_{k,b} \mathbf {T}\_{b,w} \mathbf {s}\_w$$], where [$$\hat{\mathbf {T}}\_{k,b}$$] is the current estimate of camera pose k in the base frame, and [$$\mathbf {T}\_{b,w}$$] is the fixed, previously estimated transformation from the world frame to the base frame. We compare the predicted and estimated sun directions to introduce an additional error term into the bundle adjustment cost function (cf. Eq. (2)): [$$\begin{aligned} \mathcal {J} = \sum \_{k} \left( \sum \_{j} \mathbf {e}\_{\mathbf {y}\_{k,j}}^T \mathbf {R}^{-1}\_{\mathbf {y}\_{k,j}} \mathbf {e}\_{\mathbf {y}\_{k,j}} + \mathbf {e}\_{\mathbf {s}\_k}^T \mathbf {R}^{-1}\_{\mathbf {s}\_k} \mathbf {e}\_{\mathbf {s}\_k} \right) , \end{aligned}$$] (3) where [$$\mathbf {e}\_{\mathbf {s}\_k} = \hat{\mathbf {s}}\_k - \mathbf {s}\_k$$] is the error in the predicted sun direction, and [$$\mathbf {R}\_{\mathbf {s}\_k}$$] is the covariance of the errors. This additional term constrains the orientation of the camera, which helps limit drift in the VO result due to orientation error [11]. In contrast to [11], we operate directly on the 3D unit sun vectors rather than the underlying two angular degrees of freedom. While we could also use cosine distance as the error term in our cost function, in our Ceres-based implementation we found that using a Euclidean error term improved the problem’s convergence properties. This is likely because the distribution of cosine distances is not well described by a zero-mean Gaussian distribution (see Fig. 2). In principle, Eqs. (2) and (3) could include an additional term to account for uncertainty in the transformation [$$\mathbf {T}\_{b,w}$$], which was previously an estimated quantity. Although the omission of this term means that our estimator may be under-confident in the sun measurements for certain segments of the trajectory, we found that a well chosen static covariance on the sun measurements nevertheless produced good results in practice. We therefore defer an investigation of this more principled uncertainty propagation to future work. 2.4 Visual Illumination Estimation While Lambert et al. [11] make use of a hardware sun sensor to estimate the direction of the sun relative to the vehicle, in our approach we wish to use the existing RGB image stream to compute this illumination information in addition to the motion of the camera. We examine three techniques for estimating the sun direction in a single outdoor RGB image: the technique of Lalonde et al. [10], which estimates the sun direction based on a combination of weak visual cues; an improved version of [10] that makes use of a novel VO-informed prior term to improve its accuracy and reliability; and Sun-CNN, a recent technique for estimating the sun direction using a Convolutional Neural Network (CNN) [12]. “Lalonde” [10] estimates the maximum likelihood azimuth-zenith sun direction in a single RGB image by combining relatively weak information from a physically based sky model [15], shadow detection, pedestrian detection, and vertical surface detection routines, as well as a data-driven prior term that captures the distribution of typical sun zeniths in photographs. An implementation of this technique is freely available as open-source software.² For our purposes, we use only a subset of these visual cues since the others tended to produce erroneous or null results in our experiments. Specifically, we use the sky model, shadow detection, and prior term described in [10]. Figure 1a shows an example of the results we obtained using this method. Note that in this case the algorithm produced an incorrect sun detection due to the bimodal ambiguity in the shadow cue and the symmetry of the sky model and prior term. [] Fig. 1. Sample frame from KITTI sequence 2011\_09\_30\_drive\_0018 and associated sun detection results using the “Lalonde” [10] and “Lalonde-VO” methods. Top row: Probability distributions over sun positions are shown for each visual cue independently, and for the combined result. The maximum likelihood solution(s) are represented as yellow circles, and the camera’s field of view is shown in black. Bottom row: A virtual sundial (red line) is inserted in the image and casts a virtual shadow (black line) using the detected sun position. Since [10] tends to fail in the presence of ambiguous shadows and saturated sky pixels, we reject obvious outliers in our VO pipeline by thresholding the cosine distance between the observed and predicted sun directions based on the current pose estimate. In practice, we found a cosine distance threshold of 0.3 to be a reasonable choice. However, as shown in Fig. 2a, the distribution of zenith errors is skewed. This is due to the bias introduced by the prior term of [10], which fails to correctly capture the distribution of sun zeniths in the KITTI dataset. We resolve this issue by thresholding the zenith error (or, equivalently, the y-component error in the camera frame) to exclude the skewed portion of the distribution, yielding a more Gaussian-like distribution over zenith errors. “Lalonde-VO” is a modified version of [10] where we have replaced the original zenith-only prior term with a novel prior term that incorporates the expected sun direction based on the current VO estimate. The motivation for incorporating this information is twofold. First, in cases where the sky cue fails, the shadow cue’s bimodal probability distribution forces the algorithm to choose one of the two possible solutions at random, leading to a high proportion of erroneous measurements (Fig. 1a). By incorporating a weak prior based on the estimated camera pose, we can resolve the ambiguity in the two solutions (Fig. 1b). Second, ambiguous shadow cues often result in an incorrect pair of maximum likelihood sun azimuths, yet there is typically a secondary pair of local maxima with lower probability that are in fact correct. The sky cue alone is not generally strong enough to bias the result towards the correct direction in these cases, but our new VO-informed prior term allows the algorithm to ignore incorrect shadow orientations and incorporate information from the weaker pair of maxima. [] Fig. 2. Distribution of estimation errors for [10] relative to the ground truth sun vector transformed through the chain of ground truth camera poses. We use a cosine distance threshold of 0.3 to reject outlier estimates. We define our VO-informed prior term as a Gaussian distribution over azimuth and zenith angles whose mean is the expected sun direction, and choose the covariance of this distribution such that the [$$3\sigma $$] bounds on the azimuth prior span [$$360^\circ $$], while the [$$3\sigma $$] bounds on the zenith prior span [$$90^\circ $$]. In this way, we account for uncertainty in the camera poses and avoid excessively biasing the sun detection; we need only bias the result towards the correct ‘half’ of the sky. “Sun-CNN” [12] uses a Convolutional Neural Network (CNN) trained on sequences from the KITTI dataset [7] annotated with ground truth sun directions to estimate the likely azimuth angle of the sun from a single RGB image. Ma et al. [12] show that Sun-CNN substantially outperforms [10] in terms of azimuth estimation accuracy on the KITTI odometry benchmark, but since it does not estimate the zenith angle of the sun, it is best suited to planar navigation tasks such as autonomous driving of land vehicles. Since our sun-corrected VO pipeline requires the full 3D direction of the sun relative to the camera, we assign a value of zero and a large covariance to the vertical component of the Sun-CNN estimate so that the unknown component of the sun direction is effectively ignored. [] Fig. 3. Sample frames from five sequences from the KITTI raw dataset [7], ranging in length from 300 m to 3.7 Km. These sequences contain strong shadows and mostly unsaturated skies, which are amenable to visual sun direction estimation. 3 Results We present results for a combined 7.8 Km of urban driving from the popular KITTI dataset [7] using a two-frame sliding window and estimated sun directions from each algorithm for every fifth image. Figure 3 shows sample frames from five sequences in the KITTI raw dataset, ranging in length from 300 m to 3.7 Km, which we selected mainly for their strong shadows and unsaturated sky pixels. We evaluate the translational and rotational average root mean squared error (ARMSE) and the final translational drift error of our VO algorithm, both with and without the sun-based orientation correction. We processed each sequence using the same set of stereo feature tracks obtained from libviso2 [8], first using pure VO alone, then by incorporating measurements from each sun detection method in turn. The covariances associated with each sun detection algorithm were individually tuned to reflect the measurement error distribution of each algorithm, and we made a bona fide effort to present the best performance of each algorithm on each sequence. Figure 4 shows the estimated and ground truth trajectories for the 2.2 Km sequence 2011\_09\_30\_drive\_0018. With the exception of the “Lalonde” method, the sun-aided VO trajectories are noticeably closer to ground truth than the pure VO trajectory. The “Lalonde” method appears to have had minimal impact on this sequence due to the relatively low number of inlier sun detections. Table 1. Average Root Mean Squared Error (ARMSE) and final translational drift error on KITTI sequences +------------------------------------------------+---------+---------------------+------------+---------+ |   |   | VO + Sun Estimation | | | +:===============================================+:========+:====================+:===========+:========+ | | Pure VO | Lalonde | Lalonde-VO | Sun-CNN | +------------------------------------------------+---------+---------------------+------------+---------+ | 2011\_09\_26\_drive\_0019 (0.4 km) | | | | | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE [m] | 4.99 | 4.93 | 4.94 | 5.20 | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE (EN-plane) [m] | 5.42 | 5.52 | 5.52 | 5.49 | +------------------------------------------------+---------+---------------------+------------+---------+ | Rot. ARMSE ([$$\times 10^{-3}$$]) [axis-angle] | 1.47 | 1.61 | 1.61 | 1.90 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [m] | 13.04 | 12.83 | 12.89 | 13.88 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [%] | 3.21 | 3.16 | 3.18 | 3.42 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [m] | 11.45 | 11.74 | 11.77 | 11.74 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [%] | 2.82 | 2.89 | 2.90 | 2.89 | +------------------------------------------------+---------+---------------------+------------+---------+ | 2011\_09\_26\_drive\_0039 (0.3 km) | | | | | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE [m] | 2.51 | 2.50 | 2.48 | 2.53 | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE (in-plane) [m] | 2.53 | 2.55 | 2.54 | 2.57 | +------------------------------------------------+---------+---------------------+------------+---------+ | Rot. ARMSE ([$$\times 10^{-3}$$]) [axis-angle] | 1.08 | 1.13 | 1.14 | 0.06 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [m] | 8.14 | 8.01 | 7.98 | 8.40 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [%] | 2.74 | 2.69 | 2.68 | 2.82 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [m] | 6.69 | 6.77 | 6.65 | 7.01 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [%] | 2.26 | 2.27 | 2.23 | 2.35 | +------------------------------------------------+---------+---------------------+------------+---------+ | 2011\_09\_30\_drive\_0018 (2.2 km) | | | | | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE [m] | 4.66 | 6.68 | 5.47 | 2.67 | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE (EN-plane)[m] | 5.43 | 5.95 | 5.00 | 2.09 | +------------------------------------------------+---------+---------------------+------------+---------+ | Rot. ARMSE ([$$\times 10^{-3}$$]) [axis-angle] | 3.52 | 5.71 | 4.65 | 2.23 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [m] | 32.67 | 31.74 | 26.35 | 13.44 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [%] | 1.48 | 1.44 | 1.20 | 0.61 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [m] | 31.45 | 28.00 | 22.18 | 11.33 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [%] | 1.43 | 1.27 | 1.01 | 0.51 | +------------------------------------------------+---------+---------------------+------------+---------+ | 2011\_09\_30\_drive\_0020 (1.2 km) | | | | | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE [m] | 3.07 | 3.21 | 3.03 | 2.94 | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE (EN-plane) [m] | 3.37 | 3.51 | 3.35 | 3.34 | +------------------------------------------------+---------+---------------------+------------+---------+ | Rot. ARMSE ([$$\times 10^{-3}$$]) [axis-angle] | 2.10 | 2.42 | 2.64 | 1.69 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [m] | 7.19 | 7.47 | 6.57 | 7.23 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [%] | 0.58 | 0.61 | 0.53 | 0.59 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [m] | 6.43 | 6.00 | 6.52 | 7.23 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [%] | 0.52 | 0.49 | 0.53 | 0.58 | +------------------------------------------------+---------+---------------------+------------+---------+ | 2011\_10\_03\_drive\_0027 (3.7 km) | | | | | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE [m] | 4.10 | 13.84 | 10.63 | 4.08 | +------------------------------------------------+---------+---------------------+------------+---------+ | Trans. ARMSE (in-plane) [m] | 4.20 | 3.53 | 2.57 | 4.27 | +------------------------------------------------+---------+---------------------+------------+---------+ | Rot. ARMSE ([$$\times 10^{-3}$$]) [axis-angle] | 2.28 | 9.31 | 4.97 | 2.20 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [m] | 10.06 | 13.35 | 8.31 | 8.96 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift [%] | 0.27 | 0.36 | 0.22 | 0.24 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [m] | 8.33 | 2.53 | 4.23 | 8.30 | +------------------------------------------------+---------+---------------------+------------+---------+ | Final trans. drift (EN-plane) [%] | 0.22 | 0.07 | 0.11 | 0.22 | +------------------------------------------------+---------+---------------------+------------+---------+ [] Fig. 4. VO results for the 2.2 Km sequence 2011\_09\_30\_drive\_0018. The VO result is visibly closer to ground truth using the sun-based orientation correction. Table 1 quantifies the difference in the each result by reporting their translational and rotational ARMSE, as well as the final translational drift error, relative to ground truth. We see that including the sun-based orientation correction can yield a substantial reduction in estimation error compared to pure VO, particularly on the longer sequences, which contain several sharp turns. This is especially apparent in the case of sequence 2011\_09\_30\_drive\_0018, which enjoys a 43 % reduction in translational ARMSE (62 % in-plane), and a 59 % reduction in final translational drift error (64 % in-plane) using the Sun-CNN method [12]. We stress that this improvement is purely due to information already available in the existing image stream – no additional sensors are required. On the other hand, short straight sequences such as 2011\_09\_26\_drive\_0019 and 2011\_09\_26\_drive\_0039 do not benefit significantly from sun measurements since the accumulated orientation error in the VO estimate is already small. Overall, the “Sun-CNN” and “Lalonde-VO” methods outperform the “Lalonde” method in terms of reducing estimation error in our stereo VO pipeline. This is to be expected since the “Lalonde-VO” method incorporates additional information about the temporal consistency of the images, while Ma et al. [12] have already shown that Sun-CNN is both more accurate and more reliable than [10] on single images in the KITTI dataset. While “Sun-CNN” and “Lalonde-VO” yield the minimum estimation error in similar numbers of cases, in the cases where “Sun-CNN” performs better, it does so by a wide margin. Furthermore, Sun-CNN is faster to evaluate than the other two algorithms while simultaneously avoiding hand-crafted features and approximate models of hand-picked cues. This suggests that high level scene understanding using machine learning may be a promising tool for improving robot localization accuracy in addition to providing semantic information about the environment. 4 Conclusions and Main Experimental Insights In this work we have shown that estimation error in stereo visual odometry (VO) can be reduced by exploiting global illumination information available from the same image stream used to compute VO. The main insight here is that there is much to be gained in visual navigation by reasoning over more than just geometry. In particular, the notion of embracing illumination as a tool for localization is one that has not been widely adopted, yet is a promising direction for future research. Convolutional Neural Networks (CNNs) in particular appear to be excellent tools for extracting illumination information in a form amenable to conventional VO techniques. Future work might focus on developing these tools further to yield even greater gains in localization accuracy and robustness. References 1. Agarwal, S., Mierle, K.: Ceres solver. http://​ceres-solver.​org 2. Cheng, Y., Maimone, M.W., Matthies, L.: Visual odometry on the mars exploration rovers. IEEE Robot. Autom. Mag. 13(2), 54–62 (2006)CrossRef 3. Eisenman, A.R., Liebe, C.C., Perez, R.: Sun sensing on the mars exploration rovers. In: Aerospace Conference Proceedings, vol. 5, pp. 5-2249–5-2262. IEEE (2002) 4. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)MathSciNetCrossRef 5. Furgale, P., Barfoot, T.D.: Visual teach and repeat for long-range rover autonomy. J. Field Robot. 27(5), 534–560 (2010)CrossRef 6. Furgale, P., Enright, J., Barfoot, T.: Sun sensor navigation for planetary rovers. IEEE Trans. Aerosp. Electron. Syst. 47(3), 1631–1647 (2011)CrossRef 7. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)CrossRef 8. Geiger, A., Ziegler, J., Stiller, C.: StereoScan: dense 3D reconstruction in real-time. In: Proceedings of Intelligent Vehicles Symposium (IV), pp. 963–968. IEEE, June 2011 9. Kelly, J., Saripalli, S., Sukhatme, G.S.: Combined visual and inertial navigation for an unmanned aerial vehicle. In: Laugier, C., Siegwart, R. (eds.) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol. 42, pp. 255–264. Springer, Heidelberg (2008)CrossRef 10. Lalonde, J.-F., Efros, A.A., Narasimhan, S.G.: Estimating the natural illumination conditions from a single outdoor image. Int. J. Comput. Vis. 98(2), 123–145 (2011)MathSciNetCrossRef 11. Lambert, A., Furgale, P., Barfoot, T.D., Enright, J.: Field testing of visual odometry aided by a sun sensor and inclinometer. J. Field Robot. 29(3), 426–444 (2012)CrossRef 12. Ma, W.-C., Wang, S., Brubaker, M.A., Fidler, S., Urtasun, R.: Find your way by observing the sun and other semantic cues. 23 June 2016. arXiv:​1606.​07415 13. Maimone, M., Cheng, Y., Matthies, L.: Two years of visual odometry on the mars exploration rovers. J. Field Robot. 24(3), 169–186 (2007)CrossRef 14. Olson, C.F., Matthies, L.H., Schoppers, M., Maimone, M.W.: Rover navigation using stereo ego-motion. Robot. Auton. Syst. 43(4), 215–229 (2003)CrossRef 15. Perez, R., Seals, R., Michalsky, J.: All-weather model for sky luminance distribution. Solar Energy 50(3), 235–245 (1993)CrossRef Footnotes 1 Results from several state-of-the-art VO systems on the KITTI odometry benchmark can be found at http://​www.​cvlibs.​net/​datasets/​kitti/​eval\_​odometry.​php.   2 https://​github.​com/​jflalonde/​illuminationSing​leImage.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_37 Experimental Validation of a Template for Navigation of Miniature Legged Robots Konstantinos Karydis¹  , Adam Stager²  , Herbert G. Tanner²   and Ioannis Poulakakis²   (1) Department of Electrical and Computer Engineering, University of California, 900 University Ave, 357 Bourns Hall B, Riverside, California 92521, USA (2) Department of Mechanical Engineering, University of Delaware, 130 Academy Street, Room 126, Newark, Delaware 19716-3140, USA     Konstantinos Karydis (Corresponding author) Email: kkarydis@ece.ucr.edu   Adam Stager Email: astager@udel.edu   Herbert G. Tanner Email: btanner@udel.edu   Ioannis Poulakakis Email: poulakas@udel.edu Abstract This paper provides experimental evidence in support of the hypothesis that the Switching Four-bar Mechanism (SFM) model may serve as a template for miniature legged systems in quasi-static operation. The evidence suggests that the SFM captures salient motion behaviors of morphologically distinct centimeter-scale legged robots. Captured behaviors are then used for planning and control at small scales, thus demonstrating the practical utility of the SFM in navigation tasks. Keywords Miniature legged robotsBiologically-inspired robotsMultilegged robotsTemplate-based navigation 1 Introduction Miniature legged robots can involve highly-articulated transmission and actuation mechanisms. Often, they are made of materials with uncertain mechanical properties. Thus, it is particularly challenging to derive first-principle models that express the complex and inherently uncertain interactions between the system and its environment. In navigation tasks that involve quasi-static locomotion behaviors, movement is dominated by surface forces [1, 2]. Efforts for developing accurate descriptions of ground interactions suitable for inclusion into low-dimensional models amenable to planning and control are still ongoing [3]. An alternative to first-principle models are abstractions that capture only selected key features of robot behavior. Such models are known as templates [4]. Research on the motion of sprawled arthropods on the horizontal plane [5] has motivated various dynamic bio-inspired templates. For example, the Lateral Leg Spring (lls) template [6] explains lateral stabilization [7] and helps derive turning strategies [8, 9] for hexapedal runners. The Sliding Spring Leg (ssl) model [10] extends lls by incorporating the sliding effects of the leg-ground interaction. However, it is unclear how to map the parameters of such models to robot design and control parameters [11] that are critical for robot navigation. As it turns out, these challenges can be circumvented with kinematic templates [12]. To be useful for navigation tasks, kinematic templates should (i) capture salient behaviors of multiple robots, and (ii) facilitate analysis and control. The Switching Four-Bar Mechanism (sfm) model [12] was shown capable to capture the behavior of the miniature legged robot octoroach [13] when crawling at low speeds. This paper presents experimental evidence supporting the hypothesis that the sfm may serve as a template for navigation of miniature robots operating in quasi-static regimes. Specifically, the paper uses data from three morphologically distinct miniature legged robots to show that the sfm can capture various atomic motion behaviors—i.e., motion primitives—on average, by employing only a small number of physically-intuitive parameters. The work here also demonstrates the model’s practical utility in performing navigation tasks with miniature legged robots. The constructed primitives are used by a Rapidly-exploring Random Tree (rrt) planner [14] to derive platform-compatible trajectories in environments populated with obstacles. These trajectories are then realized on the robots in both open– and closed-loop navigation. Developing the domain of navigation and control for miniature legged robots is important, given the potential of such systems in real-life applications. Legs support all-terrain mobility and high-maneuverability [15, 16], while low production cost and rapid manufacturing enable deployment in large numbers. Despite the growth in the area of design and manufacturing, the area of navigation and control—with a few exceptions [17, 18]—remains under-developed. Introducing simple models, such as the Switching Four-Bar Mechanism (sfm), which can capture robot behaviors and facilitate analysis and control, may accelerate progress. 2 Technical Analysis 2.1 The SFM Model The sfm (Fig. 1(a)) is a horizontal-plane model comprising a rigid torso and four rigid legs organized in two pairs, the right [$$\{AO\_1,BO\_2\}$$] and left pair [$$\{AO\_3,BO\_4\}$$], which turn active (in contact with the ground) with a [$$50\,\%$$] duty cycle (Fig. 1(b)). The torso and legs form two alternating (but symmetrical) four-bar linkages [12]. [] Fig. 1. (a) The sfm model; G is its geometric center, d its length, and l is the leg length. (b) The model’s foot-fall pattern. The initial configuration of the legs is expressed by the leg touchdown angles. Considering the right pair for example, these are the angles [$$\phi \_1^\mathrm {td}$$] and [$$\phi \_2^\mathrm {td}$$]. The four-bar mechanism formed between the two pivot ground points [$$O\_1$$] and [$$O\_2$$] has one degree of freedom, taken here to be the angle of the hind leg of the corresponding leg pair, and thus [$$\phi \_1$$] directly determines [$$\phi \_2$$]; see Fig. 1(a). At the end of the step all angles reach their liftoff configuration (denoted [$$\phi \_1^\mathrm {lo}$$] and [$$\phi \_2^\mathrm {lo}$$]).¹ The touchdown and liftoff angles determine the evolution of the model’s state at each step [12]. The state is [$$q=(x\_G,y\_G,\theta ) \in \mathbb {R}^2\times \mathbb {S}$$], while the equations that govern the model’s state propagation are solved analytically [19, Appendix B]. Using a least-squares constrained optimization we can identify model parameter values that enable the sfm to generate outputs (that is, paths) that closely match the experimental paths of various robots, on average. 2.2 Parameter Identification Procedure Our approach focuses on identifying parameter values for specific motion primitives. A motion primitive is a time series of length T, defined as the timed average [$$w^\mathrm{ave}(t),~t \in {1,\ldots ,T}$$] of the individual paths it encapsulates (e.g., straight-line paths). The shorthand notation [$$\mathcal {M}(\xi )$$] denotes the sfm with parameters [$$\xi \in \varXi $$], while the term [$$\mathsf {out} (\mathcal {M}(\xi ))\_t$$] denotes a model-generated path. Note that a model path is essentially a time series that contains the evolution of the state of the model; the subscript t highlights this fact. Then, the (nominal) parameter values [$$\bar{\xi } \in \varXi $$] that result in model paths which best capture the experimental averages are identified through the least-squares optimization [$$\begin{aligned} \bar{\xi } = \mathop {{{\mathrm{arg\,min}}}}\limits \_{\xi \in \varXi } \sum \_{t=1}^{T}\Vert \mathsf {out} (\mathcal {M}(\xi ))\_t - w^\mathrm{ave}(t)\Vert ^2. \end{aligned}$$] (1) 2.3 Application to Miniature Legged Robots To validate the hypothesis that the SFM can be interpreted as a template for planning and control purposes, we apply the above parameter identification procedure to the three different miniature legged robots shown in Fig. 2. All robots feature two motors; each motor controls all legs of one side giving rise to a differential-drive steering method. Two motor gains, one for the left side ([$$K\_L$$]), and one for the right side ([$$K\_R$$]) determine the robots’ leg angular velocities. Due to the differential-drive steering method that these robots employ, changing the motor gains results in either straight-line or curved paths, and specifically (i) straight-line paths (sl) when [$$K\_L = K\_R$$], (ii) clockwise turns (cw) when [$$K\_L > K\_R$$], and (iii) counter-clockwise turns (ccw) when [$$K\_L < K\_R$$]. [] Fig. 2. The robots studied here. (a) The octoroach [13], designed at University of California, Berkeley. (b) A revamped octoroach designed at the University of Delaware, and (c) stagbot also designed at the University of Delaware. The primitives are constructed by collecting open-loop state measurements through a vicon motion capture system (8 cameras for an approximately [$${5\times 5\times 2}$$] m working volume) at a rate of 30 Hz. The measured states express the position of the geometric center of the robot [$$(x\_G,y\_G) \in \mathbb {R}^2$$], and its heading [$$\theta \in \mathbb {S}$$] (see Fig. 1(a)). At the beginning of each individual experimental trial, the robots are placed at a designated start area with initial state [$$(x\_G,y\_G,\theta )= (0,0,0)$$] [cm, cm, deg]. All trials are conducted on a rubber floor mat and last² 3 s; 100 trials are collected for each motion primitive. Figure 3 presents the collected paths: top plots correspond to octoroach, middle plots to the revamped octoroach, and bottom plots to stagbot. Continuous thick curves mark experimental averages and are overlaid on the body of the collected paths shown with thin curves. For both octoroach robots, the motor gains that realize the primitives at hand are [$$(K\_L,K\_R)$$] = (40, 40), (60, 20), and (20, 60) for the sl, cw, and ccw primitives, respectively. Similarly, the motor gains associated with stagbot are [$$(K\_L,K\_R) = (100,100)$$], (150, 50), and (50, 150). Certain conventions are made before solving (1). First, the quantities d and l are chosen so to match the robots’ actual length and half-width, respectively. Thus, [$$d=13$$] cm, [$$l=3$$] cm for both octoroach robots, while for stagbot [$$d=14$$] cm and [$$l=7.5$$] cm. An additional case ([$$d=13$$] cm, [$$l=3$$] cm) is also considered for stagbot. The latter is used as a means to test the robustness of the model when (some) parameter values may vary. The number of model steps for each primitive is set to³ [$$N = 10$$]. Straight-line motion is generated by activating both left and right pairs of legs [12], with the same touchdown and liftoff configurations, that is [$$\phi \_1^{\text {td}} = \phi \_2^{\text {td}} = \phi \_3^{\text {td}} = \phi \_4^{\text {td}} = \bar{\phi }^{\text {td}}$$] and [$$\phi \_1^{\text {lo}} = \phi \_2^{\text {lo}} = \phi \_3^{\text {lo}} = \phi \_4^{\text {lo}} = \bar{\phi }^{\text {lo}}$$]. Clockwise turns are generated as a variation of the above where only the left pair is active, i.e., [$$\phi \_1^{\text {td}} = \phi \_2^{\text {td}} = \phi \_1^{\text {lo}} = \phi \_2^{\text {lo}} =0$$] throughout the stride. Similarly, counter-clockwise turns are produced by activating only the right pair: [$$\phi \_3^{\text {td}} = \phi \_4^{\text {td}} = \phi \_3^{\text {lo}} = \phi \_4^{\text {lo}} = 0$$]. To capture the very rapid change of heading for the two octoroach robots at the beginning of their paths (see Fig. 3(a)-(f)) we included the initial heading of the model, [$$\theta ^\mathrm {init}$$], as an additional parameter. [] Fig. 3. Our experimental data. Thin curves denote individual paths, while thick curves indicate experimental averages. The sfm paths that best match these experimental averages are shown with lightly-shaded dashed curves. (a)-(c) ccw, sl, and cw primitives for the octoroach. Similarly (d)-(f) for the revamped octoroach, and (g)-(i) stagbot. With these conventions in place, the parameters to be identified are [$$\begin{aligned} \xi = \left[ \bar{\phi }^{\text {td}}\bar{\phi }^{\text {lo}}\theta ^\mathrm {init} \right] \in \varXi , \end{aligned}$$] (2) and the selection is made by solving (1) for each of the nine cases shown in Fig. 3. Table 1 contains the nominal model parameters. Null entries in the third column of the table indicate that the initial orientation was not included in the parameter identification. Nominal sfm paths are shown in Fig. 3 as lightly-shaded thick dashed curves. Both cases for stagbot are very close, so the output of the first case ([$$d=14$$] cm and [$$l=7.5$$] cm) is shown only. Table 1. Motion primitives, identified SFM parameter values, and errors in fit +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | Platform | Primitive type | Model parameters [$$\left\{ \bar{\phi }^\mathrm {td},\bar{\phi }^\mathrm {lo},\theta ^\mathrm {init}\right\} $$] [deg] | Error in fit ([$$\epsilon \_x$$] [cm], [$$\epsilon \_y$$] [cm], [$$\epsilon \_\theta $$] [deg]) | +:===================+:===============+:=======================================================================================================================+:=============================================================================================+ | OctoRoACH | SL | [$$\{65.57, 27.31,0.00\}$$] | (0.14, 0.57, 7.38) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | | CW | [$$\{38.78, 15.70, -15.00\}$$] | (0.11, 0.21, 17.20) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | | CCW | [$$\{40.40, 11.65, 15.00\}$$] | (0.35, 0.37, 19.55) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | Revamped OctoRoACH | SL | [$$\{0.06, -39.81, 0.00\}$$] | (0.08, 0.64, 2.42) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | | CW | [$$\{23.96, 4.93, -15.00\}$$] | (0.26, 0.47, 7.26) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | | CCW | [$$\{28.60, 11.30, 15.00\}$$] | (0.21, 0.51, 7.91) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | STAGBOT | SL | [$$\{8.39, -12.97, 0.00\}$$] | (0.15, 0.92, 1.87) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | [$$d=14$$] cm | CW | [$$\{38.95, -3.87, 0.00\}$$] | (0.15, 0.66, 1.92) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | [$$l=7.5$$] cm | CCW | [$$\{26.91, 12.95, 0.00\}$$] | (0.15, 0.75, 1.26) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | STAGBOT | SL | [$$\{25.06, -28.65, 0.00\}$$] | (0.15, 0.91, 1.89) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | [$$d=13$$] cm | CW | [$$\{25.98, 12.93, 0.00\}$$] | (0.14, 0.69, 1.23) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ | [$$l=3$$] cm | CCW | [$$\{39.88, -4.28, 0.00\}$$] | (0.12, 0.59, 1.96) | +--------------------+----------------+------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------+ The last column of Table 1 provides an error measure for the quality of fit, [$$\begin{aligned} \left( \epsilon \_x, \epsilon \_y, \epsilon \_\theta \right) = \frac{1}{T}\sum \_{t=1}^{T}| \mathsf {out} (\mathcal {M}(\xi ))\_t - w^\mathrm{ave}(t)|. \end{aligned}$$] (3) Position errors are small in all cases. Heading errors, however, seem to depend on the platform. The octoroach robots demonstrate more variability in their motion, which is observed in experiments through abrupt changes in heading during a primitive. These discrepancies are caught by the error measure in (3). The errors shown in Table 1 are in agreement with the visual observations made based on Fig. 3, i.e., the original octoroach behavior is more variable than the revamped octoroach, while stagbot has more smooth and less uncertain behavior. Overall, the model captures the average experimental data in all cases. This is achieved by utilizing primarily two physically-relevant parameters: the touchdown and liftoff angles. The model’s initial orientation is added as a parameter only for the turning primitives of the octoroach robots. The sfm also captures the behavior of stagbot equally well in both cases; this provides preliminary evidence that the sfm is robust as a model when some parameters may be perturbed. Taken together with the fact that the reported data come from morphologically distinct robots that operate quasi-statically at different speeds, the aforementioned results indicate that the Switching Four-bar Mechanism can serve as a template for miniature legged robots in quasi-static operation regimes. 3 Template-Based Navigation The above motion primitives can be used for template-based trajectory planning, and open– and closed-loop navigation in environments populated with obstacles. Apart from facilitating motion planning, the primitives also link template parameter values to motor gains of physical platforms. This way, the sfm template is also useful for motion planning and navigation for miniature legged robots. We show this by using the original octoroach and stagbot robots. 3.1 Trajectory Planning An rrt planner [14, Sections 5.5 and 14] employs the template-generated motion primitives (see Table 1) to create a collision-free reference path between an initial and a desired final state. Obstacle boundaries are enlarged to account for the physical robots’ volume (enlarged regions shown in gray shading on Fig. 4). A generated branch is discarded if it intersects with an obstacle region, and only the end of each primitive is used to define a new vertex. Since each primitive lasts 3 s with uniform nominal speed, it is straightforward to express a desired path in the form of a trajectory. The sfm integrates well with an rrt planner [20]. Figure 4 shows reference trajectories from the initial state [$$q\_0=(20,20,-90)$$] [cm, cm, deg], to the desired [$$q\_d=(100,100,0)$$] [cm, cm, deg]. Reaching exactly [$$q\_d$$] is unlikely due to the discretization induced by the motion primitives and planner; thus, acceptable plans are allowed to end within a radius of 15 cm around (100, 100) and with a final orientation in the range [$$[-45^\circ ,45^\circ ]$$]. Thin curves mark the tree branches while a thick curve highlights the generated trajectory, using octoroach (Fig. 4(a)), and stagbot (Fig. 4(b)) primitives. The same obstacle environment is used in the experimental study (cf. Fig. 4(c)). [] Fig. 4. (a) Application using octoroach primitives. (b) Application using stagbot primitives (with [$$d=14$$] cm and [$$l=7.5$$] cm). (c) Physical realization of the environment. [] Fig. 5. Experimental results of template-based navigation for miniature legged robots. Desired trajectories are overlaid (cf. Fig. 4). (a) octoroach, and (b) stagbot open-loop execution has limited success because of the uncertainty that affects robot motion [22]. (c), (d) Closed-loop response for octoroach and stagbot, respectively. The controller improves success rates, and enables the robots to follow desired trajectories for longer. 3.2 Experimental Results The desired trajectories of Fig. 4 are first realized in open loop by replaying the motion primitives; the results are shown in Fig. 5(a)-(b) for a total of 15 trials. octoroach has [$$13.3\,\%$$] success in reaching its target (Fig. 5(a)), while stagbot has an increased success rate of [$$26.7\,\%$$] (Fig. 5(b)). Next, a step-by-step closed-loop controller, derived based on the SFM’s closed-form state propagation expressions [19], is applied. Given the current and desired (by the reference trajectory) state of the robot, the controller uses the model to predict the state of the robot at the next step. This information is then used to actively switch the type of primitive the robot will be executing at the next step so that the tracking error is minimized. The control commands (motor gains) are sent to the robot at 3.33 Hz (recall a primitive lasts 3 s and consists of 10 model steps). Despite its simplicity, this switching controller improves the success rates for both robots; the closed-loop responses are shown in Fig. 5(c)-(d) for a total of 15 experiments. Specifically, application of the controller increases goal attainment rates to [$$53.3\,\%$$] and [$$86.7\,\%$$], for octoroach (Fig. 5(c)) and stagbot (Fig. 5(d)), respectively. Success rates are lower for octoroach since it demonstrates more variability in its motion (cf. Fig. 3). To improve the success rates, ongoing work focuses on constructing a real-time trajectory-tracking controller based on the structure of the sfm, possibly closing a second, higher-level control loop in the spirit of model predictive navigation [21]. 4 Conclusions Experimental evidence from morphologically distinct robots suggests that the Switching Four-bar Mechanism (sfm) can serve as a template for miniature legged robots when performing navigation tasks in quasi-static operation regimes. Robot motion capabilities are encoded in the form of motion primitives, and a constrained optimization scheme links robot control inputs to template parameters realizing these primitives. The sfm integrates well with planners such as rrt, enabling motion planning at the miniature scale. Desired primitives-based trajectories are evaluated experimentally both in open and closed loop, demonstrating the efficacy of the sfm in template-based navigation. The simple nature of the model facilitates control by enabling real-time optimization. Developing the domain of navigation and control for miniature legged robots is important, given the potential of these robots in a variety of interesting real-world applications such as building inspection, search and rescue, and Intelligence, Surveillance, and Reconnaissance (isr). Critical to successfully addressing the challenges in navigation and control at small scales is the availability of simple models that can both capture the behavior of the robotic platforms, and facilitate analysis and control. This work shows that the Switching Four-Bar Mechanism (sfm) template can be a useful tool along this direction. Acknowledgments This work is supported in part by NSF under grant IIS-1350721, and by ARL MAST CTA [$$\#$$] W911NF-08-2-0004. References 1. Spence, A., Revzen, S., Seipel, J.E., Mullens, C., Full, R.J.: Insects running on elastic surfaces. J. Exp. Biol. 213, 1907–1920 (2010)CrossRef 2. Qian, F., Zhang, T., Li, C., Masarati, P., Hoover, A., Birkmeyer, P., Pullin, A., Fearing, R.S., Goldman, D.I.: Walking and running on yielding and fluidizing ground. In: Robotics: Science and Systems (2012) 3. Aguilar, J., Zhang, T., Qian, F., Kingsbury, M., McInroe, B., Mazouchova, N., Li, C., Maladen, R., Gong, C., Travers, M., Hatton, R., Choset, H., Umbanhowar, P., Goldman, D.: A review on locomotion robophysics: the study of movement at the intersection of robotics, soft matter and dynamical systems. Rep. Prog. Phys. 79(11), 35pp (2016). 110001 4. Full, R., Koditschek, D.: Templates and anchors: neuromechanical hypotheses of legged locomotion on land. J. Exp. Biol. 202, 3325–3332 (1999) 5. Blickhan, R., Full, R.J.: Similarity in multilegged locomotion: bouncing like a monopode. J. Comp. Physiol. Neuroethol. Sens. Neural Behav. Physiol. 173, 509–517 (1993) 6. Holmes, P., Full, R.J., Koditschek, D.E., Guckenheimer, J.: The dynamics of legged locomotion: models, analyses, and challenges. SIAM Review 48(2), 207–304 (2006)MathSciNetCrossRefMATH 7. Seipel, J.E., Holmes, P.J., Full, R.J.: Dynamics and stability of insect locomotion: a hexapedal model for horizontal plane motions. Biol. Cybern. 91(2), 76–90 (2004)CrossRefMATH 8. Jindrich, D., Full, R.: Many-legged maneuverability: dynamics of turning in hexapods. J. Exp. Biol. 202, 1603–1623 (1999) 9. Proctor, J., Holmes, P.: Steering by transient destabilization in piecewise-holonomic models of legged locomotion. Regul. Chaotic Dyn. 13(4), 267–282 (2008)MathSciNetCrossRefMATH 10. Zarrouk, D., Fearing, R.S.: Controlled in-plane locomotion of a hexapod using a single actuator. IEEE Trans. Robot. 31(1), 157–167 (2015)CrossRef 11. Hoover, A.M., Burden, S., Fu, X.-Y., Sastry, S., Fearing, R.S.: Bio-inspired design and dynamic maneuverability of a minimally actuated six-legged robot. In: IEEE International Conference on Biomedical Robotics and Biomechatronics, pp. 869–876 (2010) 12. Karydis, K., Liu, Y., Poulakakis, I., Tanner, H.G.: A template candidate for miniature legged robots in quasi-static motion. Auton. Robots 38(2), 193–209 (2015)CrossRef 13. Pullin, A., Kohut, N., Zarrouk, D., Fearing, R.S.: Dynamic turning of 13 cm robot comparing tail and differential drive. In: IEEE International Conference on Robotics and Automation, pp. 5086–5093 (2012) 14. LaValle, S.M.: Planning Algorithms. Cambridge University Press, Cambridge (2006)CrossRefMATH 15. Mongeau, J.-M., McRae, B., Jusufi, A., Birkmeyer, P., Hoover, A.M., Fearing, R.S., Full, R.J.: Rapid inversion: running animals and robots swing like a pendulum under ledges. PLoS ONE 7(6), e38003 (2012)CrossRef 16. Li, C., Pullin, A.O., Haldane, D.W., Lam, H.K., Fearing, R.S., Full, R.J.: Terradynamically streamlined shapes in animals and robots enhance traversability through densely cluttered terrain. Bioinspir. Biomim. 10(4), 046003 (2015)CrossRef 17. Mathis, A., Russell, J., Moore, T., Cohen, J., Satterfield, B., Kohut, N., Fu, X., Fearing, R.S.: Autonomous navigation of a 5 gram crawling millirobot in a complex environment. In: Adaptive Mobile Robotics: 15th International Conference on Climbing & Walking Robots & the Support Technologies for Mobile Machines, pp. 121–128 (2012) 18. Karydis, K., Zarrouk, D., Poulakakis, I., Fearing, R.S., Tanner, H.G.: Planning with the STAR(s). In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3033–3038 (2014) 19. Karydis, K.: “A data-driven hierarchical framework for planning, navigation, and control of uncertain systems: Applications to miniature legged robots,” Ph.D. dissertation, University of Delaware (2015) 20. Karydis, K., Liu, Y., Poulakakis, I., Tanner, H.G.: Navigation of miniature legged robots using a new template. In: 23rd Mediterranean Conference on Control and Automation, pp. 1112–1117 (2015) 21. Karydis, K., Valbuena, L., Tanner, H.G.: Model predictive navigation for position and orientation control of nonholonomic vehicles. In: IEEE International Conference on Robotics and Automation, pp. 3206–3211 (2012) 22. Karydis, K., Poulakakis, I., Sun, J., Tanner, H.G.: Probabilistically valid stochastic extensions of deterministic models for systems with uncertainty. Int. J. Robot. Res. 34(10), 1278–1295 (2015)CrossRef Footnotes 1 Due to symmetry, the above description holds for the left pair by replacing indices [$$\text {1}$$] and [$$\text {2}$$] with indices 3 and 4, respectively.   2 In fact, a duration of 3 s turns out to be a good trade-off between path dispersion, and length which affects the overall computationally complexity; see [12].   3 The number of steps has been chosen empirically to provide adequate resolution for the touchdown and liftoff configurations in 3 sec-long experimental data.   Perception © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_38 Fruit Pose Estimation and Stem Touch Detection for Green Pepper Automatic Harvesting Peteris Eizentals¹  , Koichi Oka² and Akinori Harada² (1) Kochi University of Japan, Kochi, Japan (2) Department of Intelligent Mechanical Systems Engineering, Kochi University of Technology, Kochi, Japan     Peteris Eizentals Email: peteris.eizentals@gmail.com 1 Motivation Automatic harvesting consists of two main sub-steps: target recognition and picking/detachment of recognized targets. Target fruit recognition is a machine vision task that has been the subject of much research ever since the automatic harvesting was first introduced in the early 1960s [1]. The methods used for recognition largely depend on the properties of the fruit being harvested. Fruits, such as strawberries and tomatoes, can be relatively easily detected by a simple RGB color segmenting as the color of a ripe fruit differs significantly from both unripe fruits and the surrounding foliage, while fruits, such as green apples and green peppers, might require spectral analysis to distinguish them from the surrounding foliage [2]. In all cases, however, the recognition is greatly complicated by issues such as changing illumination conditions, shadows, occlusions of fruits by surrounding leaves and other fruits, color and shape variations and reflectance. Grasping and detachment of the recognized fruits is a separate topic of study. Much research has been done to develop harvesting grippers and end-effectors for automatic harvesting robots. The methods used differ depending on the size and shape of the harvested fruits. The main actuation techniques used are electric, pneumatic and, less frequently, hydraulic grippers [3]. Recognition of a target doesn’t necessarily guarantee a successful detachment of the fruit. Irregular fruit shape, size variations, and occluded stem position are only a few of the major causes of unsuccessful detachment of a recognized fruit. It has been noted in the literature that pose estimation in space could provide the necessary information for a proper grasping of the detected fruit [2]. Knowledge of the pose of a fruit in space can be used to calculate the optimal approach to gripping and manipulation of the fruit when necessary. Moreover, the fruit pose can give the valuable information about the location of the stem for detachment. Japanese green pepper often grows slantwise due to a thick stem. The slantwise growth together with similarity in color between fruits and the surrounding foliage makes it hard to accurately detect the position of the stem and therefore complicates automatic harvesting. As a result, the visual information can’t be used for the stem position calculation. This paper describes a novel approach to green pepper automatic harvesting, in which the pose of a fruit in space is first computed for each fruit to calculate the stem position, and afterward, the stem is located by searching in the calculated position with a piezo-based touch sensor, mounted on the harvesting end-effector. The described method was developed for the end-effector, which consists of a cutter-pincer system and performs cutting and gripping with a single movement. As a result, only the stem position is required for harvesting. The performance of both the stem position calculation and piezo touch sensor was tested separately and the results are presented. Up to our best knowledge, this is the first research to successfully implement fruit pose estimation in automatic harvesting and to investigate touch sensing for the stem position detection. 2 Technical Approach 2.1 Pose Estimation The proposed algorithm consists of two major steps: stem position calculation by pose estimation and stem position detection with a touch sensor. The pose estimation is obtained by matching a model to surface points of the target fruit. Detection of the targets is performed by a CMOS USB camera and surface points are acquired by a LIDAR type laser range finder (URG-04LX, Hokuyo Automatic Co. Ltd.). Both camera and laser are mounted on a vertical slider with a moving range of around 1000 mm (Fig. 1a). The position of a stem is calculated as follows. First, the slider is moved down while a picture is taken once every 10 mm of the slider displacement. The fruit detection algorithm is applied on each image to find any fruit in the current field of sight. Fruits that had a mass center in the acquired image within a certain distance from the horizontal optical axis of the camera are registered in the initial harvesting target list. In this way, information about the fruit height with respect to the harvesting robot is acquired. After visual detection in the entire length of the vertical slider is finished, laser data is measured in ±50 mm around the measured center point of the detected targets with a 2 mm step in the vertical direction. The algorithm can deal with multiple detections and, to decrease the time consumption of the laser measurement acquisition step, only one measurement is performed in the case of overlapping heights for two or more fruits. [] Fig. 1. (a) Vertical slider setup and (b) the laboratory test setup The acquired laser measurement for each fruit is analyzed separately by first removing the points outside the harvesting area, which depends on the reaching area of the used manipulator. Point removal greatly decreases the size of the point cloud and therefore lowering the time consumption of the following steps. The remaining points are forward-projected on the image by using a function in the Camera Calibration Toolbox for Matlab [4]. Forward-projection allows determining which points of the laser measurement belong to which objects in the acquired image. All points outside the detected fruit segment in the image are removed. Next, the DBSCAN segmenting [5] is used to remove noise and resolve cases when two or more adjacent fruits are recognized as a single fruit. The remaining segments are used for the model matching step, where Coherent Point Drift (CPD) algorithm [7] is used to find the optimal translation and rotation with which a predefined model fits the fruit surface points. Finally, the translation and rotation, found by CPD algorithm, are used to calculate the stem position with respect to the harvesting robot. 2.2 Piezo Touch Sensor The result from the pose estimation algorithm is a calculation and not detection. As a result, means are necessary to verify that the stem is in the cutting position. For this reason, a touch sensor was designed to be mounted on the end-effector of the harvester. The developed sensor consists of two adjacent piezo stack actuators (3 × 3 × 5 mm, Nihon Ceratec Co., Ltd.) and a sensing tip, mounted on top of them (Fig. 2). One of the piezo stacks is called the “driving” actuator and is actuated with a sine wave in one of the resonant frequencies of the system, while the other, the “reading” actuator is generating a charge due to the motion caused by the first actuator. The generated charge is measured by an Arduino Due microcontroller with a speed of around 1MSPS. A change of the resonant frequency of the sensor system is induced in case of a contact between the sensing tip and an object of a sufficient mass. Consequently, the driving frequency of the sensor is not the resonant anymore, causing the generated charge to drop significantly. To increase the sensitivity of the sensor, trapezoidal numerical integration was performed on each consecutive 512 measurements. The resulting value is directly proportional to the generated charge and is very sensitive to the change of the amplitude of the generated sinusoidal signal from the reading actuator. The sensor was positioned so that the “V” shape of the end-effector scissors would guide the stem of a fruit towards the sensor, and the sensor would detect when the stem is in cutting position. [] Fig. 2. End-effector with the piezo sensor installed 2.3 Harvesting Robot Design A vertical slider is an important part of the pose estimation hardware as it allows acquiring laser range reading in slices. Mounting such slider on a conventional type harvesting robot could decrease the stability of the robot, limit the movement of the manipulator and increase the overall size of the harvester. A monorail type automatic harvesting robot was designed and is under development to implement the vertical slider and therefore also the pose estimation algorithm [8]. The new design consists of the same basic main parts as a conventional robot but in a different configuration (Fig. 3). The movement system consists of a monorail (120 mm × 40 mm) that is positioned above the pathway between plant beds, and components for the guidance of the robot along the monorail. The motion along the railway is achieved by a roller, which is positioned on the top of the robot and actuated by DC motor (Maxon, 148867) with a gearhead that gives additional torque to the motor and prevents any movement of the robot by inertia. The lateral position along the monorail is achieved by two guiding rods on each end of the robot that slides in the groove in the middle of the monorail. All electronics and controllers for actuators are contained in the main robot box under the monorail. The monorail is positioned so that the main box slides above the top level of plants and only the picking and recognition systems penetrate the foliage. The recognition system consists of a camera, laser range finder and an LED array that are attached to a linear slider. The picking system is planned to consist of a four segment continuum manipulator, which is under development [8], and a scissors-pincer type end-effector. [] Fig. 3. Monorail harvesting robot, (a) CAD model and (b) prototype in the assembly stage The central processing unit of the robot is a low power consumption computer (Gigabyte Brix GB-BXBT-2807), while two additional Arduino UNO boards and one Arduino DUE board are used for motor control and sensor data acquisition respectively. The first Arduino UNO board is equipped with an Arduino Motor Shield R3 and an Adafruit Motor Shield v2, and its main tasks are the control of the linear slider movement and of the monorail DC motor. The main function of the second Arduino UNO board is control of the manipulator, but the Arduino DUE board is used for fast data acquisition from the piezo sensor. The system is powered using four 12 V 15000 mAh Li-ion battery packs, which are charged by a non-contact inductive power supply to eliminate the necessity for an operator to plug in the robot for charging and achieve a fully autonomous operation. 3 Experimental Results 3.1 Pose Estimation Stem position calculation by pose estimation and the developed sensor were tested and the most important results are presented. Two tests were performed to assess the accuracy of the pose estimation algorithm. During the first test under laboratory conditions, a green pepper fruit was attached to a test rig (Fig. 1b) and positioned in a set rotation in the range from −45° to +45° with a 15° step along the X and Z axes. The pose estimation algorithm was then applied as described in the previous section, and fruit orientation in space, as well as the cutting point position, was calculated. The cutting point in this study was assumed to be 10 mm above the fruit. The calculated position of the stem was compared to the set position to calculate the error. In total, 10 fruits were used for this test with 49 different angle setups for each fruit. According to the test results, in 38% of cases, the error between the set and calculated positions was less than 15 mm. In 58.4% of cases, it was less than 20 mm and in 77.6%, less than 25 mm. The main source of the total error was found to be the distance calculated to the cutting point, as the laser used in this study has an accuracy of ±10 mm. Also, a considerable increase of the error was observed for inclination angles of ±45°. For inclination angles 0°, ±15° and ±30°, the average total error was 11.1 mm, 11.2 mm, and 15.1 mm, while for angles of ±45°, the same error was 23.2 mm (Fig. 4). [] Fig. 4. Results of the stem position calculation test Field testing was performed on 107 randomly selected fruits in a greenhouse. An accurate measurement of the actual stem position in a greenhouse is complicated, thus the result was evaluated by forward projection of the found cutting point and the central axis of a fruit on the image, which was used for target recognition (Fig. 5a). The result was evaluated by a score depending on the distance between the projected cutting point and the actual stem position with 1 being the best and 0 the being the worst (Fig. 5b). The mean score was found to be 0.82 with 51 of 107 fruits receiving a score of 1, 30 receiving a score of 0.75, 18 receiving a score of 0.5, 4 received 0.25 and 4 received a zero score. The total success rate was from 75.7–92.5% depending on the on-border results that received 0.5 points. [] Fig. 5. (a) Example of a greenhouse test result and (b) the scoring template 3.2 Piezo Touch Sensor The piezo touch sensor was tested to evaluate frequency response, touch sensitivity, movement sensitivity and performance in a greenhouse. The frequency response test revealed several resonant frequencies within the 1 kHz–150 kHz range, from which two most prominent peaks were at 49.4 and 71.0 kHz (Fig. 6). In-depth examination revealed that the measurement at the resonant frequency of 71.0 kHz is significantly more stable than the one at 49.4 kHz and therefore it was chosen as the driving frequency. [] Fig. 6. Frequency response of the developed piezo touch sensor To measure the sensitivity of the sensor, a pin was pushed to certain parts of the sensor (Fig. 7) with a known force, which was measured by tension gauge. This test revealed that the sensor can detect forces directed to the wings of the sensing tip starting from 0.09N (Fig. 8). The result is considered to be satisfactory as the average force to move a green pepper by the stem, which was measured in the greenhouse, was approx. 0.18–0.3N depending on the pushing direction with respect to the stem and the main branch. [] Fig. 7. FEM analysis result for 70 kHz frequency and points used for the sensitivity test [] Fig. 8. The sensitivity test result, error bars indicate standard deviation for 10 consecutive tests A movement test was performed to identify whether the measurement is affected by the manipulator movement by performing random manipulator movements while measuring the sensor output. No significant change in the output of the piezo sensor was observed during the movement test. The standard deviation of the sensor output increased from approx. 1.3% of the total output value to 1.6%. During the field test, 22 green pepper fruits were randomly selected. The manipulator was moved manually to cause a contact between the sensor and a stem while the sensor output was recorded (Fig. 9). The change of the sensor output after a contact with a stem was from 11.18% to 73.15% with an average change of 31.37%. [] Fig. 9. Example of a measurement obtained during the field test 4 Main Experimental Insight The results of the performed laboratory test for the pose estimation and stem position calculation algorithm indicated a good performance for cases when the inclination angle of the fruit was up to 30°. The average error of measurements for inclinations up to 30° along the x, y, z-axes and the total error was 6.9 mm, 5.2 mm, 7.8 mm, and 13.7 mm respectively. A significant deterioration of the accuracy of stem position calculation was observed at an inclination angle of 45°. The average error along x, y, z-axes and the total error of the calculated stem position in such cases was 15.4 mm, 6.9 mm, 11.4 mm, and 23.2 mm respectively. This result demonstrated that the proposed algorithm can be safely used for fruit pose and stem position calculation for cases with an inclination angle up to 30°. As mentioned before, one of the main sources of the error was the laser range finder measurement error. The accuracy of the used sensor is ±10 mm which, in some cases, would lead to a situation when a whole slice had significantly different distance measurement than adjacent slices. As a result, the DBSCAN segmentation algorithm discarded that slice as noise and divided the surface points into two segments due to the now existing gap. According to the algorithm, the system now would register two smaller fruits instead of one. One of the possible solutions to this problem would be fine-tuning of the DBSCAN parameters, ε, and minPts, where ε is the radius for point cloud segmentation and minPts is a minimum number of points necessary in the radius ε to be considered a segment. Care must be taken when choosing values for these parameters as setting them too high makes the segmentation insensitive to noise, whereas too low values make the segmentation too sensitive, which leads to many small segments. Another potential source of error is the shape of the model used for model matching during pose estimation. The shape used in this study was half of a truncated cone surface, which was considered to be the simplest approximation of the surface of a green pepper. The CPD algorithm applied an affine transformation to the used model to deal with the change of size and shape irregularities. The results from both laboratory and greenhouse tests showed that, although the affine transformation worked well to compensate for shape and size changes, on a few occasions, it would introduce error by choosing to change the shape instead of adjusting the angle. The results of performed tests suggest good stability and sensitivity of the developed sensor. The standard deviation of the sensor output at the used resonant frequency of 71 kHz was only 1.3% of the total value while the average change of the output value after a contact with a green pepper stem was measured to be approx. 30% of the total value. The field test for piezo sensor also suggested a good suitability of the chosen sensor tip shape for the given task. Sufficient sensitivity was confirmed during the test, as long as good contact between the sensor and the stem of a fruit was provided. The shape of the sensor worked well with moving the stem to sensitive areas of the sensor tip. 5 Conclusions A novel method of green pepper automatic harvesting has been developed and tested. In this method, the position of the stem is calculated from the information of the pose of a fruit in space and verified by a piezo-based touch sensor. The pose of a fruit in space is calculated by a model matching algorithm, which uses predefined fruit model and fruit surface point position information, obtained by a laser range finder. The piezo touch sensor is designed especially for the purpose of detecting a contact with a green pepper stem while ignoring a contact with leaves. Both stem position calculation and touch sensor were tested to evaluate the performance and suitability of the sensor for the given task. In the experiment under laboratory conditions, the pose estimation algorithm was capable of calculating the position of the stem with an acceptable accuracy for fruits with inclination angle up to 30°. The performance of the method was verified under greenhouse conditions, where the success rate was in the range from 75.7–92.5%. Several different parameters were tested for the developed piezo touch sensor to evaluate their performance. Tests revealed a good sensitivity and stability of the sensor. Testing in the greenhouse assured that the sensor performs well under real working conditions. Future work includes developing a control algorithm for the harvesting robot, based on the developed methods. An in-depth field study will be performed to reveal possible issues and optimize the developed algorithms. References 1. Li, P., Lee, S., Hsu, H.Y.: Review on fruit harvesting method for potential use of automatic fruit harvesting systems. Procedia Eng. 23, 351–366 (2011)CrossRef 2. Kapach, K., Barnea, E., Mairon, R., Edan, Y., Ben-Shahar, O.: Computer vision for fruit harvesting robots – state of the art and challenges ahead. Int. J. Comput. Vis. Robot. 3(1/2), 4–34 (2012)CrossRef 3. Blanes, C., Mellado, M., Ortiz, C., Valera, A.: Review technologies for robot grippers in pick and place operations for fresh fruits and vegetables. Span. J. Agric. Res. 9(4), 1130–1141 (2011)CrossRef 4. Bouguet, J.: Camera Calibration Toolbox For Matlab (1999). http://​www.​vision.​caltech.​edu/​bouguetj/​index.​html. Accessed 22 Mar 2016 5. Tran, T.N., Drab, K., Daszykowski, M.: Revised DBSCAN algorithm to cluster data with dense adjacent clusters. Chemometr. Intell. Lab. Syst. 120, 92–96 (2013)CrossRef 6. Myronenko, A., Song, X.: Point set registration: coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 32(12), 2262–2275 (2010)CrossRef 7. Eizentals, P., Tokunaga, T., Oka K.: Design of a monorail type green pepper automatic harvesting robot. In: Proceedings of the Robotics and Mechatronics Conference, Yokohama, Japan, 8–11 June 2016 8. Tokunaga, T., Eizentals, P., Oka, K.: [] [] (only in Japanese). In: Proceedings of the Robotics and Mechatronics Conference, Yokohama, Japan, 8–11 June 2016 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_39 From Localized Shearing to Localized Slippage Perception Van Anh Ho¹   and Shinichi Hirai² (1) Department of Mechanical and System Engineering, Ryukoku University, Kyoto, Japan (2) Department of Robotics, Ritsumeikan University, Kyoto, Japan     Van Anh Ho Email: van-ho@rins.ryukoku.ac.jp URL: http://mec3342.mecsys.ryukoku.ac.jp/ho/ Abstract We have proposed a novel haptic display that can generate localized shearing pattern on human fingertip, resulting in the partial slip perception. The device comprises of a bundle of stiff pins resting on a specially designed concave base whose planar movement is controlled by a two-DOF (degree of freedom) linear stage. These pins’ free ends, when making contact with human fingertip surface, can displace horizontally and partially. The novel characteristic of this design is that the pattern of localized slippage could be generated by altering the geometric shape of the supporting base and the number of haptic pins. We introduced a dynamic model for investigation of mechanical response of stress or strain on human fingertip under operation of the proposed haptic device. By variation of the device’s design, it is possible to study the sense of partial slippage on human fingertip under experimental conditions such as applied force, sliding velocity, direction on slip perception of volunteering subjects, using VAS (visual analog scale). The results presented in this paper may help assess human slip perception for the development of haptic display. Keywords Haptic interfaceMulti-pin designSlip perceptionLocalized displacement phenomenonTactile feedback [] Fig. 1. Slip haptic display that can generate localized displacement phenomenon on fingertip, for enhancement of slip perception of human (a) Working principle: a bundle of stiff haptic pins are constrained on a concave supporting base, and displaced horizontally on human fingertip. Displacement of each pin depends on the ratio [$$l\_2/l\_1$$], where [$$l\_1$$] is the distance between a constraint surface to the supporting base and [$$l\_2$$] is the distance between the constraint surface and each pin’s free end. (b) Simulated result of human’s skin deformation caused by partial displacement of haptic pins, showing the boundary area deformed more than the inner one. (c) Experimental result of pins’ end movement taken by a high speed camera, showing the different displacements under only one horizontal actuation of the supporting base. 1 Introduction During touching or haptic exploration, humans can sense various modalities of real world, from roughness to temperature and pain [1]. Stress/strain, vibration, and heat conduction are of important mechanical factors for realization of stable manipulation in human and robots. Among haptic exploration action, sliding over the surface of an object can facilitate the responses of mechanoreceptor at various mechanical frequencies, providing rich information about the object than other actions such as indentation, poking, or holding. In contrast to the overt (or total) sliding action in haptic exploration, the pre-slide phase or incipient slippage occurring before and at the onset of slippage is crucial for stable manipulation. Partial micro-slippage can occur at the contact area while holding an object that is about to slip out of one’s hand, with the overt slip occurring only when microslippage erodes the entire contact area. This phenomenon, called localized slippage, is dominant during the pre-slide phase or incipient slip period [2]. Vibrations caused by localized slippage during the pre-slide phase activate mechanoreptors, i.e. the slip perception is assessed by the individual, allowing an individual to react quickly to the incipient slippage by applying a grip force to stop its motion. Although considerable research has been devoted to understand human perception of total slippage, few studies have attempted to clarify the perception of slippage during the pre-slide phase. Many studies have analyzed slippage and its associated sensations, including vibrations and stretching of the skin of robot and human fingertips. Slip perception can provide considerable information on, for example, a surface’s tribology caused by skin stretching, which is mediated through Meissner corpuscles and Ruffini endings, corresponding to FA-I and SA-II afferent nerves, respectively [3, 4]. A slip event is also considered crucial for human hand dexterity in manipulating objects, as humans can effortlessly adjust their gripping force based on slippage or the onset of slippage sensed at the contact area between the fingers and the grasped object [4]. A study of the dynamics of slip onset on human fingertips under various contact conditions, conducted by Delhaye et al. [5], showed the partial displacement on the fingertip pad before overt slippage occurs, a finding similar to that reported previously. Nonetheless, none of these research could represent this phenomenon in simulation. We previously proposed a model for the theoretical study of this phenomenon on human fingertips, which we called Localized Displacement Phenomenon (LDP). LDP indicates partial micro-slippage on the contact surface of soft objects during the pre-slide phase, with slippage occurring initially at the boundary of the contact surface, subsequently eroding the central area when relative movement continues [2]. Overt slippage can occur only when micro-slippage dominates the entire contact surface. Micro-slippage occurring locally on the contact surface during the pre-slide phase is a dominant phenomenon, helping humans assess incipient slippage and generate suitable gripping force to prevent total movement of the grasped object. The LDP was then utilized for development of a novel haptic display [8]. This research consisted of conducting psychophysical experiments on human slippage perception using our previously proposed haptic display device [8]. We also constructed a dynamic model for investigation of strain/stress distribution on fingertip pad under interaction with the haptic display device. As our device enables variations in localized slippage patterns on human fingertips only by changing the shape of haptic pins’ supporting base, we compared theoretical and perceptual methods of design optimization. In addition, this study characterizes human slippage perception in response to various applied loads. These results may be important for ultimately understanding human slippage perception and enable its application in human-robot interactions and tele-operation systems. 2 Design of the Haptic Display 2.1 Setup Figure 1 briefly describes the idea utilized for development of this device. If the movement of one haptic pin’s is indicated by [$$s\_1$$], with its first actuating end A moving on a surface [$$\alpha $$], a constrained point B located [$$l\_1$$] from A ([$$l\_1 < l$$]), and the other end C free to move along the surface [$$\beta $$], the distance between B and C would be [$$l\_2 = l - l\_1$$]. As we focused only on a short movements of A and C, we assumed that both surfaces, [$$\alpha $$] and [$$\beta $$], were perpendicular to the pin’s straight posture. If the actuating end A is displaced by a stroke length [$$s\_1$$], the resultant movement of pin C would be proportional to the ratio [$$\psi =l\_2/l\_1$$]. Thus, if C is chosen to stretch the skin on the fingertip, and if pin design (length and constraint position) were optimized, it would be possible to generate various movements of multiple pins with a minimum number of actuation. [] Fig. 2. Design of the proposed haptic display in a general case. (a) Human holds the device by making contact with haptic pins which are put through on a constraint surface and rested on a supporting base. The supporting base is activated to move in parallel with the fingertip’s surface, resulting localized displacement of haptic pins on the fingertip. (b) Constraint surface is machined with holes for pins to be put through, and the supporting base with swallow holes for pins to rest. (c)–(f) Cross-sectional sketch of possible designs of the supporting base. Each design results in different pattern of localized displacement of pins’s free end on human fingertip [8]. Our ultimate purpose is to build a slip display utilizing the above description that can be attached to commercial haptic devices for tele-operated task. Figure 2(a) illustrates a typical design of this device in a general case. Human fingertip FI make contact with bundle of haptic pins 6. These pins are put through holes 71 on the constrain surface 7 that is drawn in Fig. 2(b). Constrain surface 7 is fixed by four pillars that attached on the main base 2. Each pin 60 has one end 60a resting on the supporting base 5, while another end 60b makes contact with human finger pad. The supporting base 5, which is crucial for generate various displacement patterns of haptic pins, is typically a concave base 51 with many holes micromachined on the concave surface 52 (Fig. 2(b)). Variations of the concave surface 52 of the supporting base 5 could result in distinguishing patterns of localized skin stretch on human fingertip. We have proposed four designs that are illustrated in Fig. 2(c–f). First three designs (c–e) are for generation of the skin stretch that obeys to the idea of LDP, in which pins’ end at the outer area would display more than that of the inner one. Design in Fig. 2(f) devotes to representation of stress distribution that was experimentally investigated in [6]. The design of the supporting base is not limited to the proposed ones, it can be justified based on applications that require specialized patterns of localized displacement/skin stretch on human finger pad. We have built a dynamic model that allows numerical simulation of interactions between haptic pins and human fingertips. This model permits analysis of mechanical characteristics during physical contact, such as applied force and lateral skin stretching generated by the movement of haptic pins on a human fingertip (Fig. 3). The model is an extension of our previous Beam Bundle Model (BBM), which simulated soft contact between human or robotic fingertips and grasped objects, focusing on the incipient slip phenomenon [2]. In the BBM, the soft fingertip (tissue) is considered to be comprised of multiple soft virtual beams, which are bendable and compressible and considered sufficient to describe the deformation of the fingertip during contact. Moreover, the contact surface (or skin) was meshed with a finite element method to represent microslippage that occurs during the incipient slip phase. Stress-strain relation on one element is considered to be viscoelastic, thus a Voigt model (comprised of a spring and damper connected in parallel) is utilized for representation of visco-elasticity. This study also introduced interactions between haptic pins and the contact nodes of a human fingertip. Specifically, each pin’s free end was assigned a node on the meshed contact surface, as well as to the corresponding end of a virtual beam. This model assumed that the bond between the pin’s end and the contact’s surface node would not be broken during lateral displacement, thus corresponding to a pure skin stretching phenomenon. As a result, the lateral movement/force of the pins’ ends caused by the haptic device was considered a geometric constraint/external force to the BBM. Figure 3(a) illustrates the scenario of this simulation and the interactive force acting on a single haptic pin during contact and displacement on a human fingertip. By construction of motion equations of nodes, it is possible to assess dynamic change of stress and strain during interaction of haptic pins and fingertip. [] Fig. 3. Dynamic modeling and simulation results. (a) Beam Bundle Model of the soft fingertip in contact with haptic pins. It is assumed that each haptic pin makes a bond with a fingertip’s virtual beam, thus the pin’s horizontal movement results in shearing on the fingertip. (b) Dynamic simulation of the soft fingertip’s localized shearing displacement over time, showing that the resulted shearing on the fingertip conforms to the LDP [2]. Figure 3(b) shows a simulation result of lateral skin stretch distribution on the fingertip during interaction with the haptic pins. In this simulation, haptic pins at the bordering area are set to give larger lateral force than the inner ones, satisfying the Fibonacci chain rule. We can observe that the resulted dynamic skin stretch distribution on the fingertip is obeying the LDP rule, in which the outer area give larger way than the inner one over time. By using this model, it is possible to obtain different skin stretch distribution by changing arrangement of haptic pins as illustrated in Fig. 1(b). As a result, by suitable arrangement of the haptic pins, we can generate different patterns of skin stretch on human fingertip, thus the tactile perception of localized slippage can mediated by mechanoreceptor underneath the skin of the fingertip. Detailed derivation would be shown in the final version. [] Fig. 4. Simulation results: (a) Distribution of localized lateral displacement on the contact surface caused by haptic pins. (b) Dynamic response of lateral force acting on contact node caused by haptic pins. Yellow lines are corresponding to nodes at the boundary area of the contact pad. Orange lines are with the middle area, while the blue lines belongs to the central area of the contact pad. Figure 4(a) also illustrates the simulated distribution of localized displacement on the entire fingertip pad during operation of the proposed device. This simulation allowed estimation of the movement of the neighboring area around a single haptic pin. The overlap in skin areas observed between neighboring haptic pins indicates that the space between pins should be designed properly to minimize overlapping areas. For example, in Fig. 4(a), each pin is surrounded by a space of 5 mm, resulting in maximum displacement of bordering ping of 3 mm. Because the generated displacements were not large, the space between haptic pins could be reduced. This model may therefore help optimize arrangements of pins depending on actual applications. The model could also determine the dynamic responses of lateral forces of nodes on the contact surface caused by the bundle of haptic pins, with each plot’s color corresponding to its position on the contact area (Fig. 4(b)). In this configuration, pins are displaced from zero to a predetermined value at 1 s, increasing lateral forces. Due to the small magnitude of displacement, there was little damping effect on these plots. Rather, the plots increased almost linearly. This is also similar to a conclusion in [7], in which authors considered the skin behavior is almost linear at small strain of traction. [] Fig. 5. Experimental setup. Volunteered subject sit in front of a desk on which placed the haptic device. The finger is rested on the hinge and made contact to the bundle of pins. Under a translation of the concave base, pins’ free end moved laterally differently, resulting the localized displacement on the fingertip pad. The entire device is covered by a carton box that prevents subject from seeing the its movement. Headphone with white noise is to block sound from device’s motorized movement. The inset picture shows the picture of subject’s fingertip making contact on the device’s pins. Direction of movement is categorized in four groups: D: Distal, P: Proximal, U: Ulnar, and R: Radial. 3 Experiments 3.1 Experiment Setup The grounded haptic display device used in the experiment is illustrated in Fig. 5. This device consisted of: 1. 1. A linear movement, driven by a linear motorized system (XMG650, Misumi, Japan), providing a minimum stroke of 2 [$$\upmu $$]m.   2. 2. A 6-dof loadcell, which can measure total force/torque exerted on a human fingertip during the experiment (BL-tech Ltd., Japan).   3. 3. A supporting base, containing micro-machined holes on which the actuating ends of the pins were located. Because the free ends of the haptic pins could move in any direction, the holes must act as spherical joints in constraining the free ends.   4. 4. A constraint plane (i.e. a transparent acrylic plate), with many small holes for insertion of the pins. This plane constrains the spatial movement of the pins during the experiment.   5. 5. A bundle of straight, rigid pins with actuating ends resting on the concave base and constrained by the holes on the constraint plane. Each pin is made of copper and has a cross-sectional diameter of 1 mm. The free end of each pin is rounded so that it does not cause pain when making contact with human skin.   6. 6. A hinge, designed to support a human fingertip during the experiments. This hinge is attached to a vertical stage that can be moved vertically, allowing the precise placement of a human fingertip in contact with the free ends of the pins. The tip of the hinge has a slit measuring 15 mm [$$\times $$] 20 mm, similar to the aperture in [9], to expose the pad of the fingertip.   7. 7. A self-developed capacitative button that can switch on and off based on a light touch from a human. This button was prepared for timely and precise capture of human touch for the purpose of indication.   3.2 Psychophysical Test’s Protocol Psychophysical tests were performed on male six volunteers. All were students of our institution, were aged 21 to 25 years, and had no history of tactile disorder/disease. All subjects were informed about the methodology and purpose of this experiment and provided informed written consent. The study protocol and methods were approved by the Ryukoku University Committee Board. Subjects were asked to sit comfortably in front of the device and to rest their fingertip on the hinge. The height of the hinge was adjusted by controlling the motorized vertical linear stage so that the index finger pad made slight contact with the device’s pins. A 2-dof motorized linear stage was employed to provide the pins with planar motion in any direction. The device was covered by a black box, so that the subjects could not see the movement of the device. In addition, subjects wore headphones with white noise so that they could not assess the movement of the linear stage from the sound of the motor. Each subject’s finger was slid, starting from a resting state, to correspond to the three designs of the haptic display device (Fig. 5). During each trial, the speed and direction of movement were varied randomly without informing the subjects. After each experimental trial, the subject was asked to evaluate localized slippage perception using a Visual Analog Scale (VAS) from 0 to 100. Scores of 0–20, 20–40, 40–60, 60–80 and 80–100 were regarded as complete lack of slippage perception, little perception, clear perception, very clear perception, and strong perception, respectively. [] Fig. 6. Visual Analog Score (VAS): result of subjects’ localized slippage perception in four direction of the haptic display device’s movements. 3.3 Perceptual Insights On Different Design. We conducted an experiment to assess whether the change of the display base designs: Spherical shape (Fig. 2(c),) Cone shape (Fig. 2(d),) and Cone shape with cone surface (Fig. 2(e)) would affect the sense of partial slippage on human. Figure 6 shows the VAS score moving speeds (1 mm/s) in four directions of displacement (distal (D), proximal (P), radial (R), and ulnar (U), as illustrated in 5). Subjects in were allowed to feel the slippage of a flat surface traversing the tips of their thumbs and index fingers before the tests. This surface does not result in much localized slippage perception [8]. The result shows that three design types did not differ much in generating localized slippage perception, although mean VAS was usually highest for Cone shape with cone surface. Consequently, despite our previous kinetic analysis, suggesting that the Cone shape with cone surface design may generate a more explicit sense of localized slippage on human fingertips [8], the perceptual results shown above indicate that localized slippage perception does not vary significantly over the three designs. Although this design may have generated slightly better scores and grip force, any of the three designs can be used to assess localized slippage perception. [] Fig. 7. Effect of contact force on VAS. Subjects were asked to keep the applied force unchanged in three levels during perceptual experiments: 0.8 N, 1.5 N, and 2.0 N. At higher load, the VAS are highly evaluated with less reluctance compared to lower load. Effects on Localized Slippage Perception. Subjects were asked to apply three levels of constant force during each trial, 0.8 N, 1.5 N, and 2 N, while controlling the linear stage to displace 2.5 mm at a speed of 1.5 mm/s in the four directions, D, P, R, and U. A large screen was placed in front of each subject, so that they could assess the magnitude of force they were applying to the haptic pins. Each subject performed five trials at each of the three levels of force in each of the four directions, for a total of 180 trials for the three subjects. VAS scores increased when subjects applied more force on the haptic pins, with the VAS being highest in the P-direction at 2 N (Fig. 7). The magnitude of the error bars indicates that VAS is more consistent at higher loads. Although interviews after the experiments showed that the subjects did not feel any discomfort at 2 N, higher applied loads are not recommended to improve localized slippage perception, since higher loads may cause fingertip pain over time. The increased VAS may be due to the relationship between higher load on haptic pins and fingertip pad stiffness. Authors in [7] indicated that the skin would become stiffer at high load. Thus, greater displacement of haptic pins can generate a higher value of lateral force, which can react with mechanoreceptors, especially the Ruffini endings, which are both sensitive to skin stretching and located deep within the dermis. This may result in an enhanced feeling of localized displacement on subjects’ fingertip pads. Subjects were asked to apply a constant force of 1.0 N while controlling the linear stage to displace a distance of 2.5 mm at three different velocities: 1.0 mm/s, 2.0 mm/s, and 3.0 mm/s along the four above mentioned directions. Each subject performed five trials at each of the three velocities, in each of the four directions, for a total of 180 trials for the three subjects. Results of obtained VAS are illustrated in Fig. 8. From these plots, we could assess that the VAS does not change significantly when the velocities increases. In this experiment, we found that the VAS at four directions of sliding tend to decrease when the velocity is over 5 mm/s. Especially, VAS decreases significantly at extremely high speed of movement ([$$>7$$] mm/s). [] Fig. 8. Effect of velocity on VAS. Subjects were asked to keep applied force of 1 N during test, then the velocity of the linear stage was increased. It can be seen that the velocity of displacement does not affect the sense of localized slippage on subjects. 4 Conclusion By proposing a novel haptic display that can represent partial movement on human fingertip thus creating the localized slippage perception on users, it is expected that this research can bring a platform for investigation of human perception on incipient slip, as well as a tool for development of haptic display related to slippage perception. Some remarks can be summarized as below: 1. 1. The localized slip perception (LSP) differs over directions of slide. The distal or proximal directions seem to generate more explicit sense of localized slippage than the radial and ulnar direction.   2. 2. Human’s perception on localized slippage using the proposed device enhances when the gripping force increases. It is due to the fact that at high gripping force, fingerpad gets stiffer, thus lateral skin stretch could result high lateral force, which can stimulate mechanoreceptor underneath the skin, which is similar to a conclusion in [7].   3. 3. LSP does not vary significantly when the movement velocity of haptic pins increased. It means that the speed of spread of localized displacement on human fingerpad has least relation to perception of slippage. This is also identical to that mentioned in [6].   4. 4. Perception of sliding direction can be generated using the proposed design. In addition, this perception is enhanced at higher gripping force. Moreover, sliding velocity does not affect this perception, since it can be scaled over a wide range of speed.   5. 5. Perception of localized slippage varies when the design of the supporting base 5 changes. As a result, other researchers can utilize this design for development of devices that can generate specialized patterns of localized shearing on the human fingerpad for tactile display purpose.   In the future, we attempt to further investigate effect of partial slippage perception on human using the proposed device with more psycho-physical tests. In addition, a light-weight, wearable device will be designed for accompanying with the commercial haptic display to form a complete tele-operated system for controlling remote robot hands with slip feedback and display. Acknowledgments This work is supported by Grant in Aid for Scientific Research KAKENHI project number 15H06739. References 1. Linden, D.J.: Touch: The Science of Hand, Heart, and Mind, Chap. 2. Viking Press, New York (2015) 2. Ho, V.A., Hirai, S.: A novel model for assessing sliding mechanics and tactile sensation of human-like fingertips during slip action. Rob. Auton. Syst. (Elsevier) 63(3), 253–267 (2015)CrossRef 3. Srinivasan, M.A., Whitehouse, J.M., LaMotte, R.H.: Tactile detection of slip: surface microgeometry and peripheral neural code. J. Neurophysiol. 63, 1323–1332 (1990) 4. Birznieks, I., Jenmalm, P., Goodwin, A.W., Johansson, R.S.: Encoding of direction of fingertip forces by human tactile afferents. J. Neurosci. 21, 8222–8237 (2001) 5. Delhaye, B., Lefevre, P., Thonnard, J.L.: Dynamics of fingertip contact during the onset of tangential slip. J. R. Soc. Interface 11, 20140698 (2014)CrossRef 6. Delhaye, B., Berrea, P.A., Edin, B.B., Lefevre, P., Thonnard, J.L.: Surface strain measurement of fingertip skin under shearing. J. R. Soc. Interface 13, 20150874 (2016)CrossRef 7. Wang, Q., Hayward, V.: In vivo biomechanics of the fingerpad skin under local tangential traction. J. Biomech. 40, 851–860 (2007)CrossRef 8. Ho, V.A., Honda, H., Hirai, S.: Development of a novel slip haptic display device based on the localized displacement phenomenon. IEEE Rob. Autom. Lett. 1, 585–592 (2016)CrossRef 9. Gleeson, B.T., Stewart, C.A., Provancher, W.R.: Feedback, improved tactile shear: tactor design and an aperture-based restraint. IEEE Trans. Haptics 4(4), 253–262 (2011)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_40 Fit for Purpose? Predicting Perception Performance Based on Past Experience Corina Gurău¹  , Chi Hay Tong¹   and Ingmar Posner¹   (1) Mobile Robotics Group, University of Oxford, Oxford, UK     Corina Gurău (Corresponding author) Email: corina@robots.ox.ac.uk   Chi Hay Tong Email: chi@robots.ox.ac.uk   Ingmar Posner Email: ingmar@robots.ox.ac.uk Abstract This paper explores the idea of predicting the likely performance of a robot’s perception system based on past experience in the same workspace. In particular, we propose to build a place-specific model of perception performance from observations gathered over time. We evaluate our method in a classical decision making scenario in which the robot must choose when and where to drive autonomously in 60 km of driving data from an urban environment. We demonstrate that leveraging visual appearance within a state-of-the-art navigation framework increases the accuracy of our performance predictions. Keywords Robot perceptionObject detectionPerformance prediction 1 Introduction Reliable robot perception is a difficult yet fundamental problem, as robots interact directly with the world and any misconduct can have adverse consequences. Our goal is to equip a robot with the introspective capability of predicting when the operational environment is challenging and its perception system is underperforming. Such a high level understanding of the operational environment constitutes a useful diagnostic tool for any decision making agent. Just as humans have the ability to anticipate a difficult road situation, such as an approaching busy intersection or a narrow and crowded street, the robot should be equipped with the ability to forsee its perceptual shortcomings. While significant effort is being devoted to building highly performant perception systems [1–3], the problem of predicting their failure in action is, to the best of our knowledge, overlooked. As robots operate in complex, continuously-evolving, dynamic workspaces, it is critical to analyse and predict how robustly their perception systems operate at any given moment in time. Our work is additionally motivated by our previous observations that perception performance for mobile robots is environment-dependent. In some places of operation performance is excellent, while in others failure occurs more often [4]. We attribute this to the vicissitudes of the environment – changes in appearance due to external factors such as weather or illumination conditions. In this work we propose to model the robot’s perception capabilities using a probabilistic framework. Our goal is to allow the robot to drive autonomously only when it is confident of its performance and require human assistance otherwise. Some examples of this scenario can be seen in Fig. 1. Requiring a human to intervene in an autonomous operation falls under the autonomy on offer paradigm, in which the robot offers autonomy only when it is extremely confident in its capabilities and hands over control to a human otherwise. More specifically, the contributions of this work are: - Introducing performance records: a probabilistic framework used to incorporate place-specific performance estimates gathered over time, which allow the robot at test time to estimate the likelihood of the perception system making a mistake. - The description of two modalities for using performance records, one of which makes use of the visual appearance of a place. - A classical decision making scenario which allows the robot to take an optimal action regarding offering autonomy. 2 Related Work There are several works that touch upon the fluctuating performance levels of a robot during operation. However, we believe to be the first to estimate the likelihood of success of a vision system by modelling its outcome as a function of space and time. The system we propose is deeply relevant to the work of [5] who describe the sensitivity of object detectors to factors such as weather and location and train local experts by incorporating place specific hard negative examples in the training procedure. When data the robot is unlikely to encounter during operation is replaced with mistakes, they are able to significantly improve the detection results. Unreliable perception performance has also been observed by [6] and [7] who attribute it to sensor data integrity and analyse the effects of challenging operational conditions on the perceptual integrity of the robot. The works of [8, 9] identify the use of biased training datasets as a cause for poor generalisation performance to new testing conditions. Similar problems are reported for localisation performance. [10, 11] propose embedding spatial models of expected localiser performance in localisation maps in order to aid trajectory planners. [] Fig. 1. Example data encountered by a robot as it traverses an urban environment in the proximity of pedestrians, cyclists and other road users. On some sections of the road on which it belives its perception system is underperforming the robot can ask to switch control back to a human operator. This higher-level characterisation of when and where an algorithm fails is similar in spirit to the concept of introspection introduced in [12]. In that work, the authors looked at the introspective capacity of different classification frameworks, which refers to a classifier’s ability to assign an appropriate measure of confidence to any test data. Mistakes are not considered catastrophic when they are made with high uncertainty as this gives the system the ability to ask for help and correct itself. Our framework is independent of the classification algorithm. It bears some similarity with [13], which introduces ALERT, a system used to predict the accuracy of a computer vision system on various tasks. We share with ALERT an aspiration to prevent failure by flagging a warning when predicting that performance will be low. However, our work stands apart from that of [13] as our approach is tailored specifically to robot perception by exploiting location and past experiences of a place. These provide useful contextual information, which can improve the robot’s decision making capabilities. 3 Approach We rarely allow robots to drive autonomously somewhere totally new. In fact, most successful autonomous operation techniques exploit the fact that the robot often traverses the same workspace over and over again [14]. If a robot has traversed a route in the past, then we would like to leverage its past experience in order to predict the robot’s performance in subsequent visits of the same place. Based on these predictions, we would like the robot to offer autonomy only if its estimates of performance are high, and deny it otherwise. Figure 2 shows how we leverage past information: we drive the same route multiple times and gather performance estimates along it. Specifically, what we estimate in this paper is the image-based pedestrian detection outcome. In order to achieve this we need to address the following: - estimating detection performance at a particular location - formulating offering/denying autonomy as a decision making problem [] Fig. 2. A new traversal (black line) of a route which has been travelled previously (grey lines) can make use of past estimates of detection performance. For instance, at Location A where we have repeatedly observed false positive detections, the performance record yields a low probability of success for the detector, while at Location B, where the detector has only produced true positive detections, the probability of success is very high. 3.1 Building Performance Records We consider the environment (the place of operation) to be an underlying hidden influence on the detection outcome. For a traversal T of a route, we denote as [$$T\_i$$] the [$$i^{th}$$] location along it. We define [$$\theta \_i$$] as the probability that at [$$T\_i$$] the detection system will be successful, and we model it as a random variable with the probability [$$p(\theta \_i)$$] assumed to be a beta density of the form [$$\begin{aligned} p(\theta \_i; \alpha , \beta ) = \dfrac{1}{B(\alpha ,\beta )} \theta \_i^{\alpha -1} (1-\theta \_i)^{\beta -1}, \quad 0 \le \theta \_i \le 1, \end{aligned}$$] (1) where [$$\alpha > 0$$], [$$\beta > 0$$] and [$$B(\alpha ,\beta )$$] is the Beta function. Our canonical prior at a new location that we see for the first time and where we have no knowledge of the success of the detector is given by [$$\alpha =1, \beta =1$$]. As the robot traverses the route, at each location [$$T\_i$$] it observes a set of detections: true positive, false positive and false negative respectively. They represent the success or failure of the detection system and we record them as binary observations [$$x\_i^j \in \{0,1\}$$] such that: [] (2) We let the observations x be modelled by a Bernoulli random variable: [$$ x \sim Ber(\theta )$$] with probability mass function: [$$\begin{aligned} p(x; \theta ) = \theta ^x(1-\theta )^{1-x}, \quad x \in \{0,1\}. \end{aligned}$$] (3) We additionally make the assumption that the set of observations [$$X\_i = \{x\_i^1, x\_i^2, ...,x\_i^{n\_i}\}$$] are conditionally independent given the probability of success [$$\theta \_i$$], and express the likelihood of successful performance for a particular location [$$T\_i$$] as: [$$\begin{aligned} p(X\_i|\theta \_i) \propto \prod \_{j=1}^{n\_i}p(x\_i^j|\theta \_i) \propto \theta \_i^{k\_i}(1-\theta \_i)^{n\_i-k\_i}, \end{aligned}$$] (4) where [$$k\_i$$] represents the number of observations indicating good performance ([$$x\_i=1$$]) out of a total of [$$n\_i$$] observations at location [$$T\_i$$] along the route. Using Bayes Theorem, we calculate the probability of the detector being successful at location [$$T\_i$$] as: [$$\begin{aligned} p(\theta \_i|X\_i) = \frac{p(X\_i|\theta \_i)p(\theta \_i)}{\int \_{\theta \_i}p(X\_i|\theta \_i)p(\theta \_i)} \end{aligned}$$] (5) Since the Beta distribution is a conjugate prior to the Bernoulli distribution, the posterior [$$p(\theta \_i|X\_i)$$] is also a Beta distribution. The hyperparameters of the posterior are updated as: [$$\begin{aligned} \widehat{\alpha \_i} = \alpha + k\_i, \quad \widehat{\beta \_i} = \beta + n\_i - k\_i \end{aligned}$$] (6) This gives us a simple procedure for incorporating observations over time. We refer to all [$$p(\theta \_i; \widehat{\alpha }, \widehat{\beta })$$] at locations [$$T\_i$$] as the performance record of the detection system on a chosen route after traversal T and use it to estimate the likely performance of the robot at test time. 3.2 Decision Making Using a Performance Record Using Bayesian decision theory we can translate the posterior probability of performance into optimal actions. In this paper we focus on the case in which the robot can take either of the following two actions: [$$a^0$$], denying autonomy or [$$a^1$$], offering autonomy at every location along a test route. The robot should choose action [$$a^0$$] when it believes that its perception system is failing and a human operator should take over control and it should choose action [$$a^1$$] when it believes that its perception system is functioning well and it can reliably operate autonomously. We make the simplifying assumption that there are only two states that the perception system can be in: failing (and producing false detections), or performing well (and the robot presents no risk when operating autonomously). In order to discriminate between the two states, we introduce hyperparameter [$$\tau $$] and denote by [$$s^0$$] the event that the perception system is failing at location [$$L\_i$$]. We compute its probability as [$$\begin{aligned} p(s^0|\theta , \tau ) = p(\theta \le \tau ) = \int \_{0}^{\tau }p(\theta ; \widehat{\alpha }, \widehat{\beta }) \mathop {}\!\mathrm {d}\theta , \end{aligned}$$] (7) where [$$p(\theta ; \widehat{\alpha }, \widehat{\beta })$$] has been estimated using the performance records proposed. We denote by [$$s^1$$] the event that the perception system is performing well and compute the probability of it happening as [$$p(s^1|\tau ) = 1 - p(s^0|\tau )$$]. In order to select an optimal action, we associate a loss to each of the event-action pairings, which reflects how serious it is to take action [$$a^i$$] when the actual state is [$$s^j$$], for [$$i,j \in \big \{0,1\big \}$$]: [$$ L(a, s) = \begin{pmatrix} 0 &{} \text {L}\_{\text {offer}} \\ \text {L}\_{\text {deny}} &{} 0 \end{pmatrix} $$] We choose the action which minimises the expected loss computed as [$$\begin{aligned} \overline{L}\_{\tau }(a) = \sum \_{i}p(s^{i})L(a, s^{i}). \end{aligned}$$] (8) In our scenario, denying autonomy and asking for help (even if un-neccessary) is more desirable than driving autonomously while the perception system is performing poorly, as the latter can have catastrophic consequences. In Fig. 3 we show the effect of adjusting the losses associated with each type of error on the actions selected. Type I, or false positive errors, correspond to the situations in which the robot denies autonomy ([$$a^0$$]) but its perception system is in reality performing well ([$$s^1$$]) and incur a loss of [$$\text {L}\_{\text {deny}}$$]. Type II, or false negative errors, occur when the robot fails to recognise that it is underperforming ([$$s^0$$]) and continues to operate autonomously ([$$a^1$$]). Figure 3 shows that by making type I errors more expensive (increasing [$$\text {L}\_{\text {offer}}$$]), the robot employs the safer action of denying autonomy more often. [] Fig. 3. Figure shows the expected loss of choosing an action for a posterior distribution [$$p(\theta |x)$$] and two different loss matrices used. When [$$\text {L}\_{\text {offer}} = \text {L}\_{\text {deny}}$$], for [$$\tau = 0.6$$] (grey line), the action chosen by the robot is to offer autonomy because it has a lower expected loss [$$\overline{L}\_{\tau =0.6}$$]. However, by setting [$$\text {L}\_{\text {offer}} = 3\times \text {L}\_{\text {deny}}$$], the optimal action becomes to deny autonomy. Increasing [$$\text {L}\_{\text {offer}}$$] creates a more cautious system that will offer autonomy less often. [] Fig. 4. Figure showing the platform and the route chosen for experiments. The vehicle is equipped with a Bumblebee3 stereo camera, Velodyne Lidars HDL32E and an INS system used for data collection. We produce both 2D and 3D pedestrian detections in image and laser data along the route in Milton Keynes shown on the right. 3.3 Performance Records and the Experience Paradigm In order to assign different observations to the same location we use geographical proximity given by GPS measurements. While this distance metric is useful for gathering all the observations close to a desired location, it does not take into account which of them are most relevant. Imagine the following test case: while driving at night, past observations gathered during night time should be more relevant than observations gathered during day time. Similarly, detection in bright sunny conditions might have a different outcome than detection during rain. In these situations having a distance metric that also incorporates visual similarity is crucial. Here is where Experience-Based Navigation (EBN) comes in. EBN [15, 16] is an ideal framework for our problem as it selects, through a camera-based localisation system, which of the past appearances of a location most resemble what the robot is experiencing at test time. In order to do this, EBN distinguishes between different visual appearances of a place and, as any vision based feature matching system, it works better at matching images when visual features are common. We hypothesise that visual features similar enough for localisation will produce a similar detection outcome. We denote the method of estimating performance using all past observations, regardless of the visual appearance of the environment by LOC, since it only incorporates observations that are close in location. We denote a second method, which leverages EBN to select observations from locations that are close both in physical distance and visual appearance by APP. We expect the second method to give better estimates of performance as it accounts for more than structural changes of environment (different locations) but also for appearance changes caused by lighting, weather, or even time of the day, that can significantly influence a detection system. Estimating performance on a given image first requires localising it against an EBN map and returning the highest scoring localisation candidates. With APP, we build the performance record using observations from these candidates only. We refer the reader to [16] for a comprehensive description of the EBN framework employed. 4 Experiments and Results We evaluate the two methods proposed for estimating performance, LOC and APP, on 60 km of driving data gathered in an urban environment in Milton Keynes over the course of six months. The same route has been traversed eight times under different environmental conditions using the data collection platform shown in Fig. 4 and comprise a total of 70 k image frames. Some examples can be seen in Fig. 1. Since manually annotating such large datasets requires a considerable effort, we make use of a surrogate metric of performance which evaluates the pedestrian detections against laser detections in order to obtain observations neccessary for building the performance record. The image detector used for the experiments presented in this paper is a support vector machine on Aggregate Channel Features (ACF) [17] trained on the INRIA Person dataset [18] following best practice. The laser detector used for providing a surrogate ground truth metric was trained on KITTI Velodyne data [19] and achieves high levels of performance as described in [20]. Note that although we require the laser sensor in order to build the performance record at training time, we do not require the sensor at test time. We estimate performance and take optimal actions either using only the performance record and the location of the robot (required by LOC), or using the performance record and the incoming image feed (required by APP). In order to evaluate the accuracy of the estimates of performance given by LOC and APP, we analyse the number of wrong decisions the robot takes while employing them. Each image frame that the robot records while driving a test trajectory is used in order to take one of the two decisions: to offer autonomy or to deny it, as described in Sect. 3. What we refer to as mistakes are the outcomes of the following two cases: - Choosing to deny autonomy when there are no false positive and no false negative detections in an image (detector performance is perfect but the robot asks for help). These errors are of type I. - Choosing to offer autonomy when there is at least one false detection in an image (detector performance is not perfect but the robot decides to drive autonomously). These errors are of type II. [] Fig. 5. (a) Figure showing the percentage of total mistakes made by the robot with varying hyperparameter [$$\tau $$]. For almost all values of [$$\tau $$], APP has a lower total percentage of mistakes than LOC. (b) Figure showing the percentage of the route that the robot offers autonomy on. The shaded regions in both plots indicate one standard deviation from the mean. Figure 5 shows the results obtained in an evaluation of all traversals in a leave-one-out fashion and an equal cost ([$$\text {L}\_{\text {offer}} = \text {L}\_{\text {deny}}$$]) for each type of mistake. We show APP having a lower total number of mistakes than LOC (Fig. 5(a)) and offering autonomy in a lot more frames (Fig. 5(b)), for high values of [$$\tau $$] which are the most desirable to use in operation. We attribute this to the fact that APP selects similar observations based on both appearance and proximity, while LOC selects observations based on proximity only. Note that at lower values of [$$\tau $$], both methods are more permissive of driving which leads to more false negative mistakes (failing to recognise that the perception system is operating poorly), while at higher values of [$$\tau $$], they deny autonomy more often which leads to more false positive mistakes (stopping the vehicle from driving despite it having good performance). Figure 5(a) also shows the total percentage of mistakes produced by always offering autonomy (Always-yes) and always denying autonomy (Always-no), which are both considerably higher than the methods proposed. This encourages us to believe that if we allow the robot to deny autonomy occasionally, rather than demanding it at times, the overall performance on a task is improved. Table 1. Percentage of mistakes (Type I, Type II) and percentage of route driven autonomously (A) shown for the two methods proposed when 2 different loss matrices are used. The value of [$$\tau $$] (hyperparameter at which the action is taken) is set to 0.5. In bold we show that APP has a better outcome than LOC in all cases except for type II errors, which we discuss in the text. +-----+------------------------------------------------------------+-------------+-------+--------------------------------------------------------------------+-------------+-------+ |   | [$$\text {L}\_{\text {offer}} = \text {L}\_{\text {deny}}$$] | | | [$$\text {L}\_{\text {offer}} = 3\times \text {L}\_{\text {deny}}$$] | | | +:====+:===========================================================+:============+:======+:===================================================================+:============+:======+ | | Type I (%) | Type II (%) | A(%) | Type I (%) | Type II (%) | A(%) | +-----+------------------------------------------------------------+-------------+-------+--------------------------------------------------------------------+-------------+-------+ | LOC | 39.01 | 2.27 | 11.70 | 42.75 | 0.78 | 6.47 | +-----+------------------------------------------------------------+-------------+-------+--------------------------------------------------------------------+-------------+-------+ | APP | 17.28 | 15.94 | 47.10 | 30.39 | 8.26 | 26.41 | +-----+------------------------------------------------------------+-------------+-------+--------------------------------------------------------------------+-------------+-------+ Figure 5(b) shows that for an equal cost on type I and type II errors APP is less conservative than LOC and prompts the robot to offer autonomy more often. This is an important advantage as encouraging the robot to take either action can be achieved by adjusting the [$$\text {L}\_{\text {offer}} / \text {L}\_{\text {deny}}$$] ratio such that the action which incurs a lower cost will be selected more often (as demonstrated by Fig. 3). Table 1 shows that by increasing the cost of [$$\text {L}\_{\text {offer}}$$], type II errors for both methods are reduced. Note that in this comparison it appears that LOC makes fewer type II errors. This is because type II errors are computed strictly on the frames on which the decision taken was to offer autonomy, which is to begin with lower for LOC. The percentage of autonomy offered is shown in the table as [$$A(\%)$$]. When instead we compute the mistakes made by the two methods for the same percentage of the route driven autonomously (set to [$$30\%$$], [$$50\%$$] and [$$70\%$$] respectively) APP makes both fewer type I and type II errors. This result is shown in Table 2 for the case of [$$\text {L}\_{\text {offer}} = \text {L}\_{\text {deny}}$$]. Table 2. APP makes fewer type I and type II errors than LOC for an equal percentage of the route driven autonomously ([$$30\%$$], [$$50\%$$] and [$$70\%$$]). +-----+---------------------+-------------+---------------------+-------------+---------------------+-------------+ |   | [$$30\%$$] autonomy | | [$$50\%$$] autonomy | | [$$70\%$$] autonomy | | +:====+:====================+:============+:====================+:============+:====================+:============+ | | Type I (%) | Type II (%) | Type I (%) | Type II (%) | Type I (%) | Type II (%) | +-----+---------------------+-------------+---------------------+-------------+---------------------+-------------+ | LOC | 25.88 | 12.99 | 19.13 | 18.39 | 4.6 | 33.07 | +-----+---------------------+-------------+---------------------+-------------+---------------------+-------------+ | APP | 21.73 | 12.57 | 17.28 | 15.94 | 4.2 | 29.37 | +-----+---------------------+-------------+---------------------+-------------+---------------------+-------------+ 5 Conclusions This work proposes a framework for estimating the robot’s perception performance at test time based on its performance at previous visits of the same place. Through a classical decision making scenario, we demonstrate that it is possible to reduce the number of mistakes the robot makes by denying autonomy when the performance is predicted to be poor. Selecting past observations from similar environmental conditions further improves our estimates. We believe that performance records can improve with more experience in the same workspace and represent a step towards reliable vision systems operating in the real world. Acknowledgements The authors gratefully acknowledge the support of this work by the European Community’s Seventh Framework Programme under grant agreement no FP7-610603 (EUROPA2) and by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/J012017/1. The authors would also like to thank Dushyant Rao for his helpful suggestions. References 1. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Neural Information Processing Systems (NIPS) (2015) 2. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition (2015) 3. Badrinarayanan, V., Kendall, A., Cipolla, R.: Segnet: a deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint 2015. arXiv:​1511.​00561 4. Hawke, J., Gurau, C., Tong, C.H., Posner, I.: Wrong today, right tomorrow: experience-based classification for robot perception. In: Field and Service Robotics (FSR), June 2015 5. Gurau, C., Hawke, J., Tong, C.H., Posner, I.: Learning on the job: improving robot perception through experience. In: Neural Information Processing Systems (NIPS) Workshop on Autonomously Learning Robots, Montreal, Quebec, Canada, 12 December 2014 6. Peynot, T., Underwood, J., Scheding, S.: Towards reliable perception for unmanned ground vehicles in challenging conditions. In: IROS, October 2009 7. Peynot, T., Scheding, S., Terho, S.: The marulan data sets: multi-sensor perception in a natural environment with challenging conditions. Int. J. Robot. Res. 29(13), 1602–1607 (2010) 8. Torralba, A., Efros, A.A.: Unbiased look at dataset bias. In: CVPR 2011, June 2011 9. Khosla, A., Zhou, T., Malisiewicz, T., Efros, A.A., Torralba, A.: Undoing the damage of dataset bias. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7572, pp. 158–171. Springer, Heidelberg (2012). doi:10.​1007/​978-3-642-33718-5\_​12 CrossRef 10. Churchill, W., Tong, C.H., Gurau, C., Posner, I., Newman, P.: Know your limits: embedding localiser performance models in teach and repeat maps. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (2015) 11. Dequaire, J., Tong, C.H., Churchill, W., Posner, I.: Off the beaten track: predicting localisation performance in visual teach and repeat. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, May 2016 12. Grimmett, H., Triebel, R., Paul, R., Posner, I.: Introspective classification for robot perception. Int. J. Robot. Res. (IJRR) (2015) 13. Zhang, P., Wang, J., Farhadi, A., Hebert, M., Parikh, D.: Predicting failures of vision systems. In: Conference on Computer Vision and Pattern Recognition (CVPR) (2014) 14. Furgale, P., Barfoot, T.D.: Visual teach and repeat for long-range rover autonomy. J. Field Robot. 27(5), 534–560 (2010)CrossRef 15. Churchill, W., Newman, P.: Experience-based navigation for long-term localisation. Int. J. Robot. Res. (IJRR) 32(14), 1645–1661 (2013)CrossRef 16. Linegar, C., Churchill, W., Newman, P.: Work smart, not hard: recalling relevant experiences for vast-scale but time-constrained localisation. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, May 2015 17. Dollar, P., Appel, R., Belongie, S., Perona, P.: Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 36(8), 1532–1545 (2014)CrossRef 18. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Diego, CA, USA (2005) 19. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the kitti dataset. Int. J. Robot. Res. (IJRR) 32(11), 1231–1237 (2013)CrossRef 20. Wang, D.Z., Posner, I.: Voting for voting in online point cloud object detection. In: Proceedings of Robotics: Science and Systems, Rome, Italy, July 2015 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_41 Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion Abhinav Valada¹  , Gabriel L. Oliveira¹, Thomas Brox¹ and Wolfram Burgard¹ (1) Department of Computer Science, University of Freiburg, Freiburg Im Breisgau, Germany     Abhinav Valada Email: valada@cs.uni-freiburg.de Abstract Semantic scene understanding of unstructured environments is a highly challenging task for robots operating in the real world. Deep Convolutional Neural Network architectures define the state of the art in various segmentation tasks. So far, researchers have focused on segmentation with RGB data. In this paper, we study the use of multispectral and multimodal images for semantic segmentation and develop fusion architectures that learn from RGB, Near-InfraRed channels, and depth data. We introduce a first-of-its-kind multispectral segmentation benchmark that contains 15, 000 images and 366 pixel-wise ground truth annotations of unstructured forest environments. We identify new data augmentation strategies that enable training of very deep models using relatively small datasets. We show that our UpNet architecture exceeds the state of the art both qualitatively and quantitatively on our benchmark. In addition, we present experimental results for segmentation under challenging real-world conditions. Benchmark and demo are publicly available at http://​deepscene.​cs.​uni-freiburg.​de. Keywords Semantic segmentationConvolutional neural networksScene understandingMultimodal perception This work has partly been supported by the European Commission under FP7-267686-LIFENAV and FP7-610603-EUROPA2. 1 Introduction Semantic scene understanding is a cornerstone for autonomous robot navigation in real-world environments. Thus far, most research on semantic scene understanding has been focused on structured environments, such as urban road scenes and indoor environments, where the objects in the scene are rigid and have distinct geometric properties. During the DARPA grand challenge, several techniques were developed for offroad perception using both cameras and lasers [20]. However, for navigation in forested environments, robots must make more complex decisions. In particular, there are obstacles that the robot can drive over, such as tall grass or bushes, but these must be distinguished safely from obstacles that the robot must avoid, such as boulders or tree trunks. In forested environments, one can exploit the presence of chlorophyll in certain obstacles as a way to discern which obstacles can be driven over [2]. However, the caveat is the reliable detection of chlorophyll using monocular cameras. This detection can be enhanced by additionally using the Near-InfraRed (NIR) wavelength, which provides a high fidelity description on the presence of vegetation. Potentially, NIR images can also enhance border accuracy and visual quality. We aim to explore the correlation and de-correlation of visible and NIR images frequencies to extract more accurate information about the scene. In this paper, we address the segmentation problem in forested environments by leveraging deep up-convolutional neural networks and techniques developed in the field of photogrammetry using multispectral cameras to obtain a robust pixel-accurate segmentation of the scene. We present an inexpensive system to capture RGB, NIR, and depth data using two monocular cameras, and introduce a first-of-a-kind multispectral and multimodal segmentation benchmark. We first evaluate the segmentation using our UpNet architecture, individually trained on various spectra and modalities contained in our dataset, then identify the best performing modalities and fuse them using various Deep Convolutional Neural Network (DCNN) fusion architecture configurations. We show that the fusion approach outperforms segmentation using either one of the modalities. Furthermore, we show that the fusion models trained on an extended version of our dataset containing extreme outdoor conditions such as snow, low-lighting and glare, demonstrate higher robustness than their unimodal counterparts. 2 Related Work In recent years, deep learning approaches have successfully been applied to various robotic vision tasks including object recognition [5, 19], detection [15, 17] and semantic segmentation [1, 10, 11, 14]. For segmentation tasks, Long et al. [11] proposed fully convolutional networks (FCNs) that use pooling layers from a classification network to refine the segmentation produced by deconvolution layers. Oliveira et al. [14] proposed an improved architecture that further increases the efficiency using parameter reduction and additional refinements. Liu et al. [10] introduced a FCN called ParseNet that models global context directly. Kendall et al. [1] proposed another extension to FCNs that improves the efficiency by using pooling indices computed in max-pooling for the upsampling step. Although, DCNNs have achieved state-of-the-art performance in various perception tasks, so far they have only been applied to and demonstrated on standard datasets that primarily contain RGB or at most depth images, collected in ideal conditions without aggressive changes in weather and illumination. There are limited number of DCNN architectures that explore the fusion of multiple modalities or spectra [3, 16, 18]. Eitel et al. [3] proposed a late-fusion approach for object detection using RGB-D data. Their approach utilizes a two-stream convolutional neural network (RGB and colorized depth image), first trained individually on each modality, followed by the fusion of their predictions using a set of inner-product layers. Schwarz et al. [16] proposes a similar approach that uses a two-stream network for RGB-D fusion, where the DCNN is only used for feature extraction followed by an SVM to determine the category, instance, and pose. Socher et al. [18] proposes technique that uses RGB and depth features extracted from a single layer CNN and feeds both representations to a set of recurrent neural networks (RNNs). The concatenation of all the vectors from the RNNs forms the final representation which is then given to a softmax classifier. In contrast to these approaches, our techniques learn highly discriminative features for semantic segmentation. We perform comprehensive evaluations on these two fusion approaches using combinations of multiple modalities and spectra contained in our benchmark. To the best of our knowledge, this is the first work to explore the use of multimodal and multispectral data for end-to-end semantic segmentation. 3 Multispectral Segmentation Benchmark We collected the dataset using our Viona autonomous mobile robot platform equipped with a Bumblebee2 stereo vision camera and a modified dashcam with the NIR-cut filter removed for acquiring RGB and NIR data respectively. We use a Wratten 25 A filter in the dashcam to capture the NIR wavelength in the blue and green channels. Both cameras are time synchronized and frames were captured at 20 Hz. In order to match the images captured by both cameras, we first compute SIFT [12] correspondences between the images using the Difference-of-Gaussian detector to provide similarity-invariance. We then filter the detected keypoints with the nearest neighbours test, followed by requiring consistency between the matches with respect to an affine transformation. The matches are further filtered using Random Sample Consensus (RANSAC) [4] and the transformation is estimated using the Moving Least Squares method by rendering through a mesh of triangles. We then transform the RGB image with respect to the NIR image and crop to the intersecting regions of interest. Although our implementation uses two cameras, it is the most cost-effective solution compared to commercial single multispectral cameras. Figure 1 shows our autonomous robot platform that we used and some examples from our benchmark from each spectrum and modality. We collected data on three different days to have enough variability in lighting conditions as shadows and sun angles play a crucial role in the quality of acquired images. Our raw dataset contains over 15, 000 images sub-sampled at 1 Hz, which corresponds to traversing about 4.7 km each day. Our benchmark contains 366 images with pixel level ground truth annotations which were manually annotated. As there is an abundant presence of vegetation in our environment, we compute global-based vegetation indices such as Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI) to extract consistent spatial and global information. NDVI is resistant to noise caused due to changing sun angles, topography and shadows but is susceptible to error due to variable atmospheric and canopy background conditions [7]. EVI was proposed to compensate for these defects with improved sensitivity to high biomass regions and improved detection though decoupling of canopy background signal and reduction in atmospheric influences. For all the images in our dataset, we calculate NDVI and EVI as shown by Huete et al. [7]. Although our dataset contains images from the Bumblebee stereo pair, the processed disparity images were substantially noisy due to several factors such as rectification artifacts, motion blur, etc. We compared the results from semi-global matching [6] to a DCNN approach that predicts depth from single images and found that for an unstructured environment such as ours, the DCNN approach gave better results. In our work, we use the approach from Liu et al. [9] that employs a deep convolutional neural field model for depth estimation by constructing unary and pairwise potentials of conditional random fields. Our dataset is publicly available at http://​deepscene.​cs.​uni-freiburg.​de/​#datasets. [] Fig. 1. Robot platform used for experimentation and sample images from our benchmark showing various spectra and modalities contained in our benchmark. 4 Technical Approach In this section, we first describe our base network architecture for segmenting unimodal images and then elaborate our approaches for learning from multimodal and multispectral images. We represent the training set as [$$S = \{(X\_n,Y\_n),n = 1,\dots ,N\}$$], where [$$X\_n=\{x\_j,j=1,\dots ,|X\_n|\}$$] denotes the raw image, [$$Y\_n=\{y\_i,j=1,\dots ,|X\_n|\},y\_j \in \{0,C\}$$] denotes the corresponding ground truth mask with C classes, [$$\theta $$] are the parameters of the network and [$$f(x\_j;\theta )$$] is the activation function. The goal of our network is to learn features by minimizing the cross-entropy [$$( softmax )$$] loss that can be computed as [$$\mathcal {L}(u,y) = -\sum \limits \_{k} y\_{k} log u\_k$$]. Using stochastic gradient decent, we then solve [] (1) Recently, approaches that employ DCNNs for semantic segmentation have achieved state-of-the-art performance on segmentation benchmarks including PASCAL VOC, PASCAL Parts, PASCAL-Context, Sift-Flow and KITTI [11, 14]. These networks are trained end-to-end and do not require multi-stage techniques. Due to their unique architecture they take the full context of the image into account while providing pixel-accurate segmentations. We build upon our UpNet architecture, following this general principle with two main components: contraction and expansion. Given an input image, the contraction is responsible for generating a low resolution segmentation mask. We use the 13-layer VGG [19] architecture as basis on the contraction side. The expansion side consists of five up-convolutional refinement segments that refine the coarse segmentation masks generated by the contraction segment. Each up-convolutional refinement is composed of one up-sampling layer followed by a convolution layer. We add a rectified linear unit (ReLU) after each refinement and to avoid overfitting and we use spatial dropout after the first and last refinement layers. [] Fig. 2. Our UpNet architecture with up-convolutional layers of size [$$C \times N\_{cl}$$], where [$$N\_{cl}$$] is the number of classes and C is a scalar factor of filter augmentations. The contractive segment of the network contains convolution and pooling layers, while the expansive segment of the network contains upsampling and convolution layers. The inner-product layers of the VGG-16 architecture has 4096 filters of [$$7 \times 7$$] size, which is primarily responsible for relatively slow classification times. We reduce the number of filters to 1024 and the filter size to [$$3 \times 3$$] to accelerate the network. There was no noticeable performance drop due to this change. Recent work have demonstrated improved performance by having variable number of filters as in the contraction segment [13, 14]. We experimented with this relationship and now use a [$$C \times N\_{cl}$$] mapping scheme, where C is a scalar constant and [$$N\_{cl}$$] is the number of classes in the dataset. This makes the network learn more feature maps per class and hence increases the efficiency in the expansion segment. In the last layer we use the number of filters as [$$N\_{cl}$$] in order to calculate the loss only over the useful classes. The structure of our base UpNet architecture is shown in Fig. 2. We train our segmentation network individually on RGB, NIR and depth data, as well as on various combinations of these spectra and modalities, as shown in Sect. 5. To provide a more informative and sharper segmentation, we introduce two approaches; - Channel Stacking: The most intuitive paradigm of fusing data using DCNNs is by stacking them into multiple channels and learning combined features end-to-end. However, previous efforts have been unsuccessful due to the difficulty in propagating gradients through the entire length of the model [11]. - Late-Fused-Convolution: In the late-fused-convolution approach, each model is first learned to segment using a specific spectrum/modality. Afterwards, the feature maps are summed up element-wise before a series of convolution, pooling and up-convolution layers. This approach has the advantage as features in each model may be good at classifying a specific class and combining them may yield a better throughput, even though it necessitates heavy parameter tuning. Figure 3 shows a depiction of both these approaches. Our experiments provide an in-depth analysis of the advantages and disadvantages of each of these approaches in the context of semantic segmentation. [] Fig. 3. Deep fusion architecture configurations proposed. Channel Stacking involves concatenating multiple modalities into channels and learning combined features from the beginning, while late-fused convolution involves individually learning to segment using separate streams, followed by further learning fused representations. 5 Experimental Results In this section, we report results using the various spectra and modalities contained in our benchmark. We use the Caffe [8] deep learning framework for the implementation. Training on an NVIDIA Titan X GPU took about 4 days with cuDNN acceleration. 5.1 Comparison to the State of the Art To compare with the state-of-the-art, we train models using the RGB RSC (Rotation, Scale, Color) set from our benchmark which contains 60,900 RGB images with rotation, scale and color augmentations applied. We selected the baseline networks by choosing the top three end-to-end deep learning approaches from the PASCAL VOC 2012 leaderboard. We explored the parameter space to achieve the best baseline performance. We trained our network with both fixed and poly learning rate policies with a initial learning rate [$$\lambda \_0 = 10^{-9}$$], which can be given as [$$\lambda \_n = \lambda \_0 \times {\left( \frac{1-N}{N\_{max}}\right) }^{c}$$], where [$$\lambda \_n$$] is the current learning rate, N is the iteration number, [$$N\_{max}$$] is the maximum number of iterations and c is the power. We train the network using stochastic gradient descent with a momentum of 0.9 for 300, 000 iterations for each refinement stage. We found the poly learning rate policy to converge faster and yield a slight improvement in performance. The metrics shown in Table 1 correspond to Mean Intersection over Union (IoU), Mean Pixel Accuracy (PA), Precision (PRE), Recall (REC), False Positive Rate (FPR), False Negative Rate (FNR). The time reported is for a forward pass through the network. The results demonstrate that our network outperforms all the state-of-the-art approaches and with a runtime of almost twice as fast as the second best technique (Figs. 4 and 5). Table 1. Performance of our proposed model in comparison to the state-of-the-art +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ | Baseline | IoU | PA | PRE | REC | FPR | FNR | Time | +:==============+:======+:======+:======+:======+:======+:======+:===========================+ | FCN-8 [11] | 77.46 | 90.95 | 87.38 | 85.97 | 10.32 | 12.12 | [$$\sim 255$$] ms | +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ | SegNet [1] | 74.81 | 88.47 | 84.63 | 86.39 | 13.53 | 11.65 | [$$\sim 156$$] ms | +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ | ParseNet [10] | 83.65 | 93.43 | 90.07 | 91.57 | 8.94 | 7.41 | [$$\sim 90$$] ms | +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ | Ours Fixed lr | 84.90 | 94.47 | 91.16 | 91.86 | 7.80 | 7.40 | [$$\sim 52$$] ms | +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ | Ours Poly lr | 85.31 | 94.47 | 91.54 | 91.91 | 7.40 | 7.30 | [$$\mathbf {\sim 52}$$] ms | +---------------+-------+-------+-------+-------+-------+-------+----------------------------+ [] Fig. 4. Comparison of forward pass time with baseline networks. [] Fig. 5. Comparison of per-class IoU of best baseline (Parsenet) with ours. 5.2 Parameter Estimation and Augmentation To increase the effective number of training samples, we employ data augmentations including scaling, rotation, color, mirroring, cropping, vignetting, skewing, and horizontal flipping. We evaluated the effect of augmentation using three different subsets in our benchmark: RSC (Rotation, Scale, Color), Geometric augmentation (Rotation, Scale, Mirroring, Cropping, Skewing, Flipping) and all aforementioned augmentations together. Table 2 shows the results from these experiments. Data augmentation helps train very large networks on small datasets. In our network, we replace the dropout in the VGG architecture with spatial dropout which gives us an improvement of 5.7%. Furthermore, we initialize the convolution layers in the expansion part of the network with Xavier initialization, which makes the convergence faster and also enables us to use a higher learning rate. This yields a 1% improvement. Table 2. Comparison on the effects of augmentation on our benchmark. +--------------+-------+-------+-------+-------+-------+-------+-------+ |   | Sky | Trail | Grass | Veg | Obst | IoU | PA | +:=============+:======+:======+:======+:======+:======+:======+:======+ | Ours Aug.RSC | 90.46 | 84.51 | 86.72 | 90.66 | 44.39 | 84.90 | 94.47 | +--------------+-------+-------+-------+-------+-------+-------+-------+ | Ours Aug.Geo | 89.60 | 84.47 | 86.03 | 90.40 | 42.23 | 84.39 | 94.15 | +--------------+-------+-------+-------+-------+-------+-------+-------+ | Ours Aug.All | 90.39 | 85.03 | 86.78 | 90.90 | 45.31 | 85.30 | 94.51 | +--------------+-------+-------+-------+-------+-------+-------+-------+ 5.3 Evaluations on Multi-Spectra/Modality Benchmark Segmentation using RGB yields best results among all the individual spectra and modalities that we experimented with. The low representational power of depth images causes poor performance in the grass, vegetation and trail classes, bringing down the mean IoU. The results of the unimodal images shown in Table 3 demonstrate the need for fusion. Multispectrum channel fusion such as NRG (Near-Infrared, Red, Green) shows greater performance when compared to their individual counterparts and better recognition of obstacles. The best channel fusion we obtained was using a three channel input, composed of grayscaled RGB, NIR and depth data. It achieved an IoU of [$$86.35\%$$] and most importantly a considerable gain (over [$$13\%$$]) on the obstacle class, which is the hardest to segment in our benchmark. The overall best performance was from the late-fused-convolution of RGB and EVI, achieving a mean IoU of [$$86.9\%$$] and comparably top results in individual class IoUs as well. This approach also had the lowest false positive and false negative rates. Table 3. Comparison of deep fusion approaches. D, N, E refer to depth, NIR and EVI respectively. CF and LFC refer channel fusion and late-fused-convolution. +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ |   | Sky | Trail | Grass | Veg | Obst | IoU | FPR | FNR | +:============+:=====================+:=====================+:=====================+:=====================+:=====================+:=====================+:====================+:====================+ | RGB | 90.46 | 84.51 | 86.72 | 90.66 | 44.39 | 84.90 | 7.80 | 7.40 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | NIR | 86.08 | 75.57 | 81.44 | 87.05 | 42.61 | 80.22 | 10.22 | 9.60 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | DEPTH | 88.24 | 66.47 | 73.35 | 83.13 | 46.13 | 76.10 | 12.76 | 11.14 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | NRG | 89.88 | 85.08 | 86.27 | 90.55 | 47.56 | 85.23 | 7.70 | 7.10 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | EVI | 88.00 | 83.40 | 84.59 | 87.68 | 44.9 | 83.25 | 8.70 | 8.10 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | NDVI | 87.79 | 83.86 | 83.57 | 87.45 | 48.19 | 83.39 | 8.62 | 8.00 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | 3CF RGB-N-D | 89.23 | [$$\varvec{85.86}$$] | 86.08 | 90.32 | [$$\varvec{61.68}$$] | 86.35 | 7.50 | 6.20 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | 4CF RGB-N | 89.64 | 83.37 | 85.83 | [$$\varvec{90.67}$$] | 59.85 | 85.79 | [$$\varvec{7.00}$$] | 7.20 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | 5CF RGB-N-D | 89.40 | 84.30 | 85.84 | 89.40 | 60.62 | 86.00 | 7.20 | 6.80 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | LFC RGB-D | 90.21 | 79.14 | 83.46 | 88.67 | 57.73 | 84.04 | 9.40 | 6.55 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | LFC RGB-N | 90.67 | 83.31 | 86.19 | 90.30 | 58.82 | 85.94 | 7.50 | 6.56 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | LFC RGB-E | [$$\varvec{90.92}$$] | 85.75 | [$$\varvec{87.03}$$] | 90.50 | 59.44 | [$$\varvec{86.90}$$] | [$$\varvec{7.00}$$] | [$$\varvec{5.76}$$] | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ | LFC NRG-D | 90.34 | 80.64 | 84.81 | 89.08 | 56.60 | 84.77 | 7.58 | 7.65 | +-------------+----------------------+----------------------+----------------------+----------------------+----------------------+----------------------+---------------------+---------------------+ 5.4 Robustness Evaluation We performed extensive evaluations on an extended version of our dataset containing adverse conditions including snow, glare, motion blur and low lighting. Figure 6 shows some qualitative results from this subset. It can be seen that each of the spectra performs well in different conditions. Segmentation using RGB images shows remarkable detail, although being easily susceptible to lighting changes. NIR images on the other hand show robustness to lighting changes but often show false positives between the sky and trail classes. EVI images are good at detecting vegetation but show a large amount of false positives for the sky. We retrained the models presented in Table 3 on our adverse conditions dataset. All the models demonstrate improved performance as they learn probable distributions of corruption patterns that occur due to a change in conditions that take place throughout the day and across seasons. Figure 7 shows the improvement in the mean IoU for both the fusion approaches after the addition of the adverse conditions subset. For unimodal data, segmentation with NIR images have the largest improvement of 3.91% mean IoU, followed by a 2.49% improvement for segmentation using RGB images. The model trained using the NIR images also showed a 5.27% decrease in the false-positive rate. For the Channel-stacking approach, segmentation using NRG images yielded the highest mean IoU of 87.27%, which is an improvement of 2.04% compared to the model trained on the dataset without the adverse conditions. Finally, for our Late-fused convolution approach, similar to the results reported in Table 3, segmentation using RGB and EVI yields the overall best results, achieving a mean IoU of 88.16%. Figure 8 shows qualitative comparisons between the segmentation obtained using the RGB model and the two deep fusion approaches that have demonstrated the best results in the quantitative experiments (Channel-stacking: NRG, Late-fused convolution: RGB and EVI). Figure 8(a) shows results in low lighting conditions and in the presence of shadows. In this scenario, models trained on RGB data or using Channel-stacking often have difficulty in identifying the pixels that belong to the trail class. This is especially evident in the images that have very narrow trail paths (Fig. 8(a), (c) and (d)). It can also be seen that the results using the RGB model and the Late-fused Convolution have the highest segmentation granularity, which is noticeable in the segmentation of trees. [] Fig. 6. Segmented examples from our benchmark. Each spectrum provides valuable information. First row shows the image and the corresponding segmentation in highly shadowed areas. Second row shows the performance in the presence of glare and snow. [] Fig. 7. Comparison of deep fusion models trained with and without the adverse conditions subset. Late-fused convolution of RGB and EVI yields the overall best fusion result, followed by of RGB and NIR. Acronyms D, N, E used in the model names refer to depth, NIR and EVI respectively and the digits indicate the number of channels. Figure 8(b) exemplifies segmentation with high saturation. RGB and Late-fused Convolution models demonstrate close similarity. Although it can be observed that only the Late-Fused Convolution model is able to accurately segment the entire obstacle (building in the right background). It can often be seen that when an obstacle is in the far background (Fig. 8(b), (e) and (f), the RGB model is only able to segment a small part of the obstacle and the Channel-stacking model completely fails to detect it. Figure 8(c) is an example of where both RGB and Channel-stacking models fail to segment a challenging transition from grass to vegetation. In this example, the RGB model also shows difficulty in accurately segmenting the trail class, especially in the areas that have tall grass. The Channel-stacking model is able to detect the entire trail, however it is unable to accurately detect the grass-vegetation transition and it shows false positives in the obstacle class. The Late-fused Convolution model is able to accurately detect such challenging transitions with a low false positive rate. [] Fig. 8. Qualitative comparison of segmentation from the unimodal RGB model and our two fusion strategies. Our late-fused convolution model consistently yields unparalleled performance even in conditions such as snow, low-lighting, glare and motion blur. The examples shown in Figs. 8(d), (e) and (f) illustrate adverse conditions such as glare, motion blur and snow. Figure 8(d) and (e) show an example of a scene with glare directly on the optics, which is common scenario for robots operating in real-world outdoor environments. Both the RGB and the Channel-stacking models are unable to accurately segment the classes in the presence of these disturbances. Figure 8(e) shows a similar scenario with motion blur and in the obstacles. Figure 8(f) is characterized by the presence of snow on the ground. RGB and Channel-stacking models misclassify snow as a part of the trail class and fail to detect the obstacles. The Late-fused Convolution model on the other hand, demonstrates invariance to glare and snow. This highlights the advantage of this approach, as it fuses feature maps further down the network, it is likely to make less mistakes and learn complementary features. In Channel-stacking, if there is a discrepancy in the features learned it cannot be corrected as the multimodal and multispectral features are learned together from the beginning. In addition to these experiments, a live demo can be accessed at http://​deepscene.​cs.​uni-freiburg.​de/​#demo, where a user can upload any image of an unstructured forest environment for segmentation or choose a random example from the benchmark. 6 Conclusions In this paper, we presented a DCNN architecture for semantic segmentation of outdoor environments. Our network outperforms several state-of-the-art architectures with near real-time performance which is critical for robotic applications. We extensively evaluated the benefits and drawbacks of deep early and late-fusion architectures for dense pixel-wise segmentation using multiple modalities and spectra. Our late-fused convolution technique exceeds channel stacking by achieving the lowest false detection rate. We additionally trained models on an extended version of our dataset containing images captured in adverse weather conditions such as snow, low-lighting, glare and motion blur. We showed that our networks learn to leverage features from complementary modalities and spectra to yield robust segmentation in the presence of these disturbances. Furthermore, we qualitatively demonstrated the benefits of multispectral fusion in several adverse conditions. The results demonstrate that fusing the NIR wavelength with RGB to obtain yields a more robust segmentation in unstructured outdoor environments. We publicly released a first-of-a-kind multispectral and multimodal semantic segmentation benchmark to accelerate further research on deep fusion. References 1. Badrinarayanan, V., et al.: SegNet: a deep convolutional encoder-decoder architecture for image segmentation. arXiv preprint (2015). arXiv:​1511.​00561 2. Bradley, D.M., et al.: Vegetation detection for mobile robot navigation. Technical report CMU-RI-TR-05-12, Carnegie Mellon University (2004) 3. Eitel, A., et al.: Multimodal deep learning for robust RGB-D object recognition. In: International Conference on Intelligent Robots and Systems (2015) 4. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis. Comm. ACM 24(6), 381–395 (1981)MathSciNetCrossRef 5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv preprint (2015). arXiv:​1512.​03385 6. Hirschmüller, H.: Accurate and efficient stereo processing by semi-global matching and mutual information. In: CVPR (2005) 7. Huete, A., Justice, C.O., van Leeuwen, W.J.D.: MODIS vegetation index (MOD 13), Algorithm Theoretical Basis Document (ATBD), Version 3.0, p. 129 (1999) 8. Jia, Y., et al.: Caffe: convolutional architecture for fast feature embedding. arXiv preprint (2014). arXiv:​1408.​5093 9. Liu, F., Shen, C., Lin, G.: Deep convolutional neural fields for depth estimation from a single image (2014). arXiv:​1411.​6387 10. Liu, W., et al.: ParseNet: looking wider to see better. preprint (2015). arXiv:​1506.​04579 11. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR, November 2015 12. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004)CrossRef 13. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: MICCAI (2015) 14. Oliveira, G.L., Burgard, W., Brox, T.: Efficient deep methods for monocular road segmentation. In: International Conference on Intelligent Robots and Systems (2016) 15. Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: NIPS (2015) 16. Schwarz, M., Schulz, H., Behnke, S.: RGB-D object recognition and pose estimation based on pre-trained convolutional neural network features. In: ICRA (2015) 17. Sermanet, P., et al.: Overfeat: integrated recognition, localization and detection using convolutional networks. arXiv preprint (2013). arXiv:​1312.​6229 18. Socher, R., et al.: Convolutional-recursive deep learning for 3D object classification. In: NIPS, vol. 25 (2012) 19. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2014). arXiv:​1409.​1556 20. Thrun, S., Montemerlo, M., Dahlkamp, H., et al.: Stanley: the robot that won the DARPA grand challenge. JFR 23(9), 661–692 (2006) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_42 Vision-Based Apple Counting and Yield Estimation Pravakar Roy¹   and Volkan Isler¹   (1) Department of Computer Science, University of Minnesota, Minneapolis, Minnesota, USA     Pravakar Roy (Corresponding author) Email: proy@cs.umn.edu   Volkan Isler Email: isler@cs.umn.edu Abstract We present a novel method for yield estimation in apple orchards. Our method takes segmented and registered images of apple clusters as input. It outputs number and location of individual apples in each cluster. Our primary technical contributions are a representation based on a mixture of Gaussians, and a novel selection criterion to choose the number of components in the mixture. The method is experimentally verified on four different datasets using images acquired by a vision platform mounted on an aerial robot, a ground vehicle and a hand-held device. The accuracy of the counting algorithm itself is [$$91\%$$]. It achieves 81–85% accuracy coupled with segmentation and registration which is significantly higher than existing image based methods. 1 Introduction For specialty farms such as apple orchards, crucial decisions such as choosing the number of pickers, the number of storage bins and early season sales contracts depend on estimating fruit count. In this paper, we present a method for counting apples from images of an orchard row. The images can be captured from various types of robotic and non-robotic platforms such as ground or aerial robots or a person carrying a hand-held camera. Counting apples from a running sequence of images is difficult for three main reasons. First, there is a trade-off between image footprint and the size of individual apples in the image. If we use close-up views, coverage becomes tedious. On the other hand, if we use a wide-angle camera, individual apples occupy only a small area in images. Second, apples can be found in arbitrarily shaped clusters in which almost all apples overlap with each other. Segmenting individual apples from such cluster images is challenging. Third, because of occlusions due to leaves and branches as well as specularities sometimes apples are not detectable at all. Surprisingly, most robotic yield estimation systems do not focus on accurate counting. Instead, these systems rely on developing a consistent estimator of actual fruit count. These methods include Wang et al. [1] who used morphological operations and ellipse fitting for counting. Their method does not handle more than two apples in a cluster. Hung et al. [2] relied on circular Hough transformation to extract individual apples. Linker et al. [3] presented a method based on fruit edge detection. This method requires very high resolution images. All these techniques work for simple instances such as those shown in Fig. 1b. In more challenging, yet common instances (see Fig. 1c), both Hough transform and ellipse fitting will fail. To identify the number of apples correctly and to locate individual apples from these images we need novel models and techniques. In this paper, we present two such methods: one for counting the apples based on a classic clustering technique - Gaussian Mixture Models (GMM) [4] and another using an intuitive greedy circle fitting algorithm. The clustering technique is robust and accurate. The greedy method is identical to the state of the art techniques such as circular Hough transform and ellipse fitting. We use it as a baseline for comparison with the performance of the GMM method. Our method takes as input segmented apple images where non-apple pixels have been eliminated. We also assume that the images have been registered and apple clusters have been aligned across images. The details of segmentation and registration of apples across multiple images can be found in our earlier work [5, 6]. In this paper, we focus on the following problem: Given a set of segmented registered apple images, count the number of individual fruits in the images and identify their locations. We start with our technical approach in the next section. [] Fig. 1. A sample image and few example apple clusters. Figure (b) shows two example single apples. These apples can be detected easily by state of the art methods such as circular Hough transform or ellipse fitting. It is hard to generalize these methods for more complex clusters shown in figure (c). Figure (d) shows a sample output from the greedy circle fitting method. Figure (e) shows the output of GMM method on the same input. 2 Technical Approach Given a set of segmented and registered input images, we want to find the number and location of all the apples in every image. In this section, we propose two different methods for solving this problem: first using Gaussian Mixture Models (GMM) and second based on greedy circle fitting. The greedy circle fitting approach provides a baseline for comparison. As the first step for both methods, we perform a connected component analysis on the input image and detect all the individual clusters. We compute the location and radius of each apple in each cluster. As the clusters are disjoint the total number of apples can be obtained by simply summing up the numbers across all clusters. 2.1 Gaussian Mixture Model Method In the GMM method each apple is modelled by a Gaussian probability distribution function (pdf) and apple clusters are modelled as mixture of Gaussians. We start by converting the input cluster image I to binary. Let this binary image be denoted by [$$I\_{b}$$]. The locations of the non-zero pixels in the binary image are used as input to GMM. Let X represent the set of apples we are trying to find. Then, we can convert our problem to a Gaussian mixture model formulation in the following way: [$$\begin{aligned} P(I\_b|X) = G^k(\phi ,\mu ,\varSigma ) = \sum \_{i = 1}^k \phi \_{i} G\_{i}(\mu \_{i},\varSigma \_{i}) \end{aligned}$$] (1) Here, [$$G^k(\phi ,\mu ,\varSigma )$$] is a Gaussian mixture model with k components, and [$$G\_{i}$$] is the i th component of the mixture. [$$\mu \_{i}$$] and [$$\varSigma \_{i}$$] are the mean and covariance of the [$$i^{th}$$] component. The covariance matrix [$$\varSigma \_{i} = \left[ \sigma \_{x\_{i}}^2,\sigma \_{y\_{i}}^2\right] $$] is diagonal. [$$\phi \_{i}$$] is the weight of the [$$i^{th}$$] component where [$$\sum \_{i= 1}^k \phi \_{i} = 1$$] and [$$0\le \phi \_{i}\le 1$$]. Given model parameters [$$\theta = \{ \phi , \mu , \varSigma \}$$], the problem of finding location of the center of the apples and their pixel diameters can be formulated as computing the world model which maximizes [$$P(I\_b|X)$$]. Each component [$$G\_{i}(\mu \_{i},\varSigma \_{i})$$] of the mixture model represents an apple with center at [$$\mu \_{i}$$], equatorial radius [$$2\sigma \_{x\_{i}}$$] and axial radius [$$2\sigma \_{y\_{i}}$$]. A common technique to solve for [$$\arg \max P(I\_b|X)$$] is the expectation maximization (EM) algorithm [4]. As is well-known, EM provides us a local greedy solution for the problem. Since EM is susceptible to local maxima, initialization is very important. We used MATLAB’s [7] implementation of K-means [8] (which uses randomly-selected seeds to avoid local maxima) for initialization of EM. Selecting the Number of Components: In our problem formulation, the number of components k is the total number of apples in image I. EM enables us to find the optimal location of the apples given the total number of apples k. Our main technical contribution in this paper is a method to calculate the correct k. Let the correct number of apples in the input image be denoted by [$$\kappa $$]. We tried different state of the art techniques (Akaike Information Criterion (AIC) [9], Minimum Description Length (MDL) [9] etc.) for finding [$$\kappa $$]. None of them worked out of the box for our purposes (Fig. 2). Therefore, we propose a new heuristic approach for evaluating mixture models with different number of components based on MDL. Unlike classic MDL based approaches we have both reward and penalty. [] Fig. 2. Popular methods like AIC/BIC do not work out of the box for our purposes. These criteria have a tendency of choosing higher values of k. In this synthetic image, we have only five circles but the AIC based criterion value (Bar plot in the middle) was lowest for [$$k = 6$$] and consequently it finds eight apples. Let [$$\sigma \_{min} = min(\sigma \_{x\_{i}},\sigma \_{y\_{i}})$$] and [$$\sigma \_{max} = max(\sigma \_{x\_{i}},\sigma \_{y\_{i}})$$]. Using the mean and covariances of the i th component we define a 2D Gaussian kernel [$$\mathcal {G}(\mu \_{i},\sigma \_{max})$$] where [$$\sigma \_{max}$$] is the variance. Let [$$P(\mu \_{i})$$] denote the response of the kernel when placed at the center [$$\mu \_{i}$$] in the original input image I and [$$C\_{i}$$] denote the total number of pixels clustered by [$$G\_{i}(\mu \_i,\varSigma \_i)$$]. For each component [$$G\_{i}(\mu \_i,\varSigma \_i)$$], of the mixture model [$$G^k(\phi ,\mu ,\varSigma )$$] we define the reward [$$R\_{i}$$] in the following way, [$$\begin{aligned} R\_i(G\_{i}) = \phi \_{i}\left[ P(\mu \_{i})+ P(\mu \_{i})\left( \frac{\sigma \_{min}}{\sigma \_{max}}\right) ^2 + P(\mu \_{i})\frac{C\_{i}}{\pi \sigma \_{max}\sigma \_{min}} - \frac{1}{3}\left( \pi \sigma \_{x\_{i}}\sigma \_{y\_{i}} -C\_{i} \right) \right] \end{aligned}$$] (2) For most of the images we only capture the frontal views of the apples, which can be easily approximated by circles lying on a plane. All four terms in Eq. (2) reward specific spatial characteristics of the Gaussian pdf related to this fact. [$$P(\mu \_{i})$$] represents the strength of the distribution in terms of pixel values and is present in first three terms. The second term rewards circular shaped distributions using the eccentricity of the pdf. As the eccentricity [$$\epsilon = \sqrt{1- \frac{\sigma \_{min}^2}{\sigma \_{max}^2}}$$], and for circles [$$\epsilon $$] is zero, we use [$$1-\epsilon ^2 = \left( \frac{\sigma \_{min}}{\sigma \_{max}}\right) ^2$$] as the rewarding factor. The third term rewards coverage. The fourth term penalizes Gaussian pdfs covering large area and clustering very few points. Now if we find out the reward [$$R\_i(G\_{i}(\mu \_i,\varSigma \_i))$$] for all the components k, the total reward for the mixture model [$$G^k(\phi ,\mu ,\varSigma )$$] can be computed by summing them together. Next, we define the penalty term. The traditional MDL penalty term is [$$U = c p \log (|Y|)$$] where p is the number of parameters in the model, |Y| is the total size of the input data, and [$$c = \frac{1}{2}$$] is a constant. Based on this principle, our penalty term is [$$V(G^k(\phi ,\mu ,\varSigma ))$$] is defined as the following [$$\begin{aligned} V(G^k(\phi ,\mu ,\varSigma )) = c'(3k) \log (\sum \_{{\varvec{x}}}(I\_b({\varvec{x}}) \ne 0))) \end{aligned}$$] (3) where x represents the pixel index across the image [$$I\_b$$]. Compared to traditional MDL based penalty we have the constant [$$c' =\frac{3}{2}$$] instead of [$$c =\frac{1}{2}$$]. This is attributed to the fact that the reward expression (2) has three terms compared to one. The number of components k is multiplied by three as each Gaussian has three parameters [$$\left[ \mu \_i,\sigma \_{x\_i}, \sigma \_{y\_i}\right] $$]. With these terms defined, we choose the correct number of components [$$\kappa $$] in the following way: [$$\begin{aligned} \kappa = \mathop {{{\mathrm{arg\,max}}}}\limits \_k R(G^k(\phi ,\mu ,\varSigma ))- V(G^k(\phi ,\mu ,\varSigma )) \end{aligned}$$] (4) [] Fig. 3. A synthetic example consisting of six random circles and plots illustrating how the number of components are selected. Figure (a) shows how the pdfs cover the circles for different values of k. Figure (b)–(d)shows how the score calculated from the rewards and penalties following the right hand side of Eq. (4). The plot shows that the score is maximum for [$$k = 6$$] which is indeed the correct number of components. To have a better understanding of the selection procedure, we demonstrate a synthetic example at Fig. 3(a). From Fig. 3(b), it is evident that except for [$$k =6$$], other mixtures have low circularity. The coverage rewards and pixel density components increase with k and converges to a steady state. While the penalty from minimum description length principle increases with k, generally the penalty for coverage decrease with k. In this example, the crucial factors in determining the score are circularity and the coverage penalty. For [$$k = 6$$] circularity is at the peak and coverage penalty is lowest and consequently the score was maximum for [$$k = 6$$]. The plot of the corresponding rewards, penalties and final scores are shown in Fig. 3(b)–(d). We show sample results from our datasets in Fig. 4. [] Fig. 4. A sample output from GMM on real data. 2.2 Greedy Circle Fitting Method The GMM method uses a global fitting criterion to segment the apples. In this section, we present a greedy approach which achieves reasonable performance efficiently. In case of images containing single apple, with rough knowledge of apple radius in terms of pixels, we can create Gaussian kernels of different sizes within the known bounds, convolve the entire image with them and find the maximum response location to find the center of an apple. This concept can be extended in case of images containing multiple apples. The first apple is chosen as the apple with maximum kernel response. Likewise, the second one can be computed by choosing another apple, which combined with the previously chosen one maximizes the total response of both. In this method, the order in which the apples are selected defines an occlusion hierarchy. The response of the combination multiple Gaussian kernels are computed using this hierarchy. Figure 1(d) shows a detected apples on an input image from the greedy method. It is unable to detect the partially visible apple. Figure 1(e) shows the output of GMM method on the same input. It successfully finds the apple missed by greedy method. 3 Experimental Results We have evaluated the performance of our counting algorithm on four different datasets. These datasets are obtained from different platforms. The first two datasets are obtained from hand-held cameras, the third dataset from a camera mounted on a ground vehicle and the fourth dataset from a camera mounted on a flying UAV. Figure 5 shows sample images from all four datasets. [] Fig. 5. Sample images from used datasets. The two images on the left are obtained from hand-held cameras. The third image from left is obtained from a camera mounted on a ground vehicle and the rightmost image is obtained from camera mounted on a UAV. We first evaluate the GMM method with the segmented apples clusters from Dataset 3 (Fig. 1(b), (c)). We used 442 such images for this purpose, all of which were hand counted. We identified clusters of different sizes (Fig. 6(a)) and evaluated the performance of our algorithm in terms of accuracy, over counting and under counting per cluster size (Fig. 6(b)). The over all accuracy for the entire dataset is [$$91.30\%$$]. For clusters containing six or more apples, the percentage of under counting goes up to [$$33.33\%$$]. The number of instances of such large sized clusters were very low to make any strong inference about the performance of the algorithm. We need further investigation to analyze the performance of the algorithm in these cases. For single apples, we found [$$10\%$$] over counting. This problem can be attributed to occlusion and specularities which make the algorithm think that there are two apples instead of one. [] Fig. 6. Performance evaluation. Next, we evaluate the gross performance of our counting pipeline using the Dataset 1 2 and 4. The gross results for Dataset 3 were not available at the time of submission. Dataset 1 contains a mixture of red and green apples. The total number of images in this dataset is 464. It covers a block in an orchard row containing six trees. Dataset 2 primarily contains red apples. The total number of images in this dataset is 964. It covers a full row in an orchard. Dataset 4 was collected using an UAV. It contains 655 images and covers six trees. The total number of hand counted apples from the images in the datasets are 258 and 952, and 673 respectively. The accuracy of GMM counting method for first and second and fourth dataset is [$$81.3492\%$$], [$$85.87\%$$] and [$$84.72\%$$]. The drop in accuracy for the first dataset is mainly attributed to our segmentation method [5] where many green apples were not detected. Figure 6(c) shows a bar chart of the accuracy of our counting methods. We show the accuracy from the greedy method for comparison. For the greedy counting method the accuracy drops down to [$$69.44\%$$] for the first dataset and [$$76.34\%$$] for the second dataset. This drop is expected as the greedy method uses a hard threshold for grouping apple pixels. It fails when the scale of the image varies a lot or apples are only partially visible. 4 Experimental Insights and Conclusion In this paper, we presented a method for counting apples from segmented and registered input images. While the results are encouraging, there is still room for improvement. Specifically, since the results presented here constitute the final part of a three step pipeline, it is important for the previous steps to be correct. For example, in Fig. 7(a), the counting procedure did not count any of the green apples. The reason behind this is that they were not detected by our color based segmentation method. The algorithm also fails to predict the correct number of fruits when there is too much overlap and boundaries among different fruits are not detectable. To get better insight into this problem, we created a synthetic input where the algorithm fails (Fig. 8). Here, the predicted number of components by the algorithm is five but the actual number is six. A closer look at the solution from EM algorithm for [$$k = 6$$] shows that, actually EM failed to find out the correct location and pixel radius of fruits which resulted in large coverage penalty and low circularity. These types of problems are present in natural conditions too. One such observation is shown on Fig. 7(b). For these types of scenarios, it is hard to predict the correct number of apples from a single view. A natural way to resolve these ambiguities is to use active vision (look at the cluster from another viewpoint). In future, we would like to use the information obtained from single views, to move actively in more suitable positions and fuse all the previously obtained information to predict the correct number and locations of fruits. [] Fig. 7. A few failure cases in real data. [] Fig. 8. A synthetic input where the algorithm fails to compute k correctly. If we look closely at the solution found by EM for the correct k, we find that EM failed to find the correct location which resulted in large coverage penalty and low circularity. Acknowledgments This work is supported in part by NSF grant # 1317788, USDA NIFA MIN-98-G02 and the MnDrive initiative. References 1. Wang, Q., Nuske, S., Bergerman, M., Singh, S.: Automated crop yield estimation for apple orchards. In: Desai, J.P., Dudek, G., Khatib, O., Kumar, V. (eds.) Experimental Robotics. STAR, vol. 88, pp. 745–758. Springer, Heidelberg (2013)CrossRef 2. Hung, C., Underwood, J., Nieto, J., Sukkarieh, S.: A feature learning based approach for automated fruit yield estimation. In: Mejias, L., Corke, P., Roberts, J. (eds.) Field and Service Robotics. STAR, vol. 105, pp. 485–498. Springer, Heidelberg (2015). doi:10.​1007/​978-3-319-07488-7\_​33 3. Linker, R., Cohen, O., Naor, A.: Determination of the number of green apples in RGB images recorded in orchards. Comput. Electron. Agric. 81, 45–57 (2012)CrossRef 4. Bilmes, J.A., et al.: A gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models. International Computer Science Institute, vol. 4(510), p. 126 (1998) 5. Roy, P., Stefas, N., Peng, C., Bayram, H., Tokekar, P., Isler, V.: Robotic surveying of apple orchards, University of Minnesota, Department of Computer Science, Technical report, June 2015 6. Roy, P., Isler, V.: Surveying apple orchards with a monocular vision sysem. In: 2016 IEEE International Conference on Automation Science and Engineering (CASE), pp. 1–6, August 2016 7. The MathWorks, Inc.: Matlab and statistics toolbox release 2015b (2015) 8. Hartigan, J.A., Wong, M.A.: Algorithm as 136: a k-means clustering algorithm. J. R. Stat. Soc. Ser. C (Appl. Stat.) 28(1), 100–108 (1979)MATH 9. Grunwald, P.: A tutorial introduction to the minimum description length principle. arXiv preprint math/0406077 (2004) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_43 Towards Learning to Perceive and Reason About Liquids Connor Schenck¹   and Dieter Fox¹ (1) University of Washington, Seattle, USA     Connor Schenck Email: schenck@cs.washington.edu Abstract Recent advances in AI and robotics have claimed many incredible results with deep learning, yet no work to date has applied deep learning to the problem of liquid perception and reasoning. In this paper, we apply fully-convolutional deep neural networks to the tasks of detecting and tracking liquids. We evaluate three models: a single-frame network, multi-frame network, and a LSTM recurrent network. Our results show that the best liquid detection results are achieved when aggregating data over multiple frames and that the LSTM network outperforms the other two in both tasks. This suggests that LSTM-based neural networks have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers. Keywords Robot perceptionDeep learningLiquidsManipulation 1 Introduction To robustly handle liquids, such as pouring a certain amount of water into a bowl, a robot must be able to perceive and reason about liquids in a way that allows for closed-loop control. Liquids present many challenges compared to solid objects. For example, liquids can not be interacted with directly by a robot, instead the robot must use a tool or container; often containers containing some amount of liquid are opaque, obstructing the robot’s view of the liquid and forcing it to remember the liquid in the container, rather than re-perceiving it at each timestep; and finally liquids are frequently transparent, making simply distinguishing them from the background a difficult task. Taken together, these challenges make perceiving and manipulating liquids highly non-trivial. Recent advances in deep learning have enabled a leap in performance not only on visual recognition tasks, but also in areas ranging from playing Atari games [1] to end-to-end policy training in robotics [2]. In this paper, we investigate how deep learning techniques can be used for perceiving liquids during pouring tasks. We develop a method for generating large amounts of labeled pouring data for training and testing using a realistic liquid simulation and rendering engine, which we use to generate a data set with 10,122 pouring sequences, each 15 s long, for a total of 2,531 min of video or over 4.5 million labeled images. Using this dataset, we evaluate multiple deep learning network architectures on the tasks of detecting liquid in an image and tracking the location of liquid even when occluded. Our results show that deep networks able to detect and track liquid in a simulated environment with a reasonable degree of robustness. We also have preliminary results that show that these networks perform well in real environments. 2 Related Work To the best of our knowledge, no prior work has investigated directly perceiving and reasoning about liquids. Existing work relating to liquids either uses coarse simulations that are disconnected to real liquid perception and dynamics [3, 4] or constrained task spaces that bypass the need to perceive or reason directly about liquids [5–9]. While some of this work has dealt with pouring, none of it has attempted to directly perceive liquids from raw sensory data. In contrast, in this work we directly approach this problem. Similarly, Rankin et al. [10, 11] investigated ways to detect pools of water from an unmanned ground vehicle navigating rough terrain. They detected water based on simple color features or sky reflections, and didn’t reason about the dynamics of the water, instead treating it as a static obstacle. Griffith et al. [12] learned to categorize objects based on their interactions with running water, although the robot did not detect or reason about the water itself, rather it used the water as a means to learn about the objects. In contrast to [12], we use vision to directly detect the liquid itself, and unlike [10, 11], we treat the liquid as dynamic and reason about it. In order to perceive liquids at the pixel level, we make use of fully-convolutional neural networks (FCN). FCNs have been successfully applied to the task of image segmentation in the past [13–15] and are a natural fit for pixel-wise classification. In addition to FCNs, we utilize long short-term memory (LSTM) [16] recurrent cells to reason about the temporal evolution of liquids. LSTMs are preferable over more standard recurrent networks for long-term memory as they overcome many of the numerical issues during training such as exploding gradients [17]. LSTM-based CNNs have been successfully applied to many temporal memory tasks by previous work [15, 18], and in fact LSTMs have even been combined with FCNs by replacing the standard fully-connected layers of their LSTMs with [$$1\times 1$$] convolution layers [15]. We use a similar method in this paper. 3 Methodology In order to train neural networks to perceive and reason about liquids, we must first have labeled data to train on. Getting pixel-wise labels for real-world data can be difficult, so in this paper we opt to use a realistic liquid simulator. In this way we can acquire ground truth pixel labels while generating images that appear as realistic as possible. We train three different types of convolutional neural networks (CNNs) on this generated data to detect and track the liquid: single-frame CNN, multi-frame CNN, and LSTM-CNN. [] Fig. 1. The setup used to simulate and render liquid sequences. The objects are shown here textureless for clarity. The sphere surrounding all the objects has been cut away to allow viewing of the objects inside. The orange shape represents the camera’s viewpoint, and the flat plane across the table from it is the plane on which the video sequence is rendered. Note that this plane is sized to exactly fill the camera’s view frustum. The background sphere is not directly visible by the camera and is used primarily to compute realistic reflections. 3.1 Data Generation We generate data using the 3D-modeling application Blender [19] and the library El’Beem for liquid simulation, which is based on the lattice-Boltzmann method for efficient, physically accurate liquid simulations [20]. We separate the data generation process into two steps: simulation and rendering. During simulation, the liquid simulator calculates the trajectory of the surface mesh of the liquid as the cup pours the liquid into the bowl. We vary 4 variables during simulation: the type of cup (cup, bottle, mug), the type of bowl (bowl, dog dish, fruit bowl), the initial amount of liquid (30 % full, 60 % full, 90 % full), and the pouring trajectory (slow, fast, partial), for a total of 81 simulations. Each simulation lasts exactly 15 s for a total of 450 frames (30 frames per second). [] Fig. 2. Examples of frames rendered by our data generation algorithm. The left column is the raw RGB images generated by the renderer; the center-left column shows the ground truth liquid location for detection; the center-right column shows the ground truth liquid location for tracking; the right column shows the ground truth labeling output by the simulator. Next we render each simulation. We separate simulation from rendering because it allows us to vary other variables that don’t affect the trajectory of the liquid mesh (e.g., camera viewpoint), which provides a significant speedup as liquid simulation is much more computationally intensive than rendering. In order to approximate realistic reflections, we mapped a 3D photo sphere image taken in our lab to the inside of a sphere, which we place in the scene surrounding all the objects. To prevent overfitting to a static background, we also add a plane in the image in front of the camera and behind the objects that plays a video of activity in our lab that approximately matches with that location in the background sphere. This setup is shown in Fig. 1. The liquid is always rendered as 100 % transparent, with only reflections, refractions, and specularities differentiating it from the background. For each simulation, we vary 6 variables: camera viewpoint (48 preset viewpoints), background video (8 videos), cup and bowl textures (6 textures each), liquid reflectivity (normal, none), and liquid index-of-refraction (air-like, low-water, normal-water). The 48 camera viewpoints were generated by varying the camera elevation (level with the table and looking down at a 45 [$$^\circ $$] angle), camera distance (8 m, 10 m, and 12 m), and the camera azimuth (the 8 points of the compass, with north, southwest, and south shown in the top, middle, and bottoms rows of Fig. 2) respectively. We also generate negative examples without liquid. In total, this yields 165,888 possible renders for each simulation. It is infeasible to render them all, so we randomly sample variable values to render. The labels are generated for each object (liquid, cup, bowl) as follows. First, all other objects in the scene are set to render as invisible. Next, the material for the object is set to render as a specific, solid color, ignoring lighting. The sequence is then rendered, yielding a class label for the object for each pixel. An example of labeled data (right column) and its corresponding rendered image (left column) is shown in Fig. 2. The cup, bowl, and liquid are rendered as red, green and blue respectively. Note that this method allows each pixel to have multiple labels, e.g., some of the pixels in the cup are labeled as both cup and liquid (magenta in the right column of Fig. 2). To determine which of the objects, if any, is visible at each pixel, we render the sequence once more with all objects set to render as their respective colors, and we use the alpha channel in the ground truth images to encode the visible class label. To evaluate our learning architectures, we generated 10,122 pouring sequences by randomly selecting render variables as described above as well as generating negative sequences (i.e., sequences without any water), for a total of 4,554,900 training images. Both the model files generated by Blender and the rendered images for the entire dataset are available for download at the following link: http://​rse-lab.​cs.​washington.​edu/​lpd/​. 3.2 Network Architecture We test three network layouts for the tasks of detecting and tracking liquids: CNN, MF-CNN, and LSTM-CNN. All of our networks are fully-convolutional [13], that is, there are no fully-connected layers. In place of fully-connected layers used in more standard CNNs, we use [$$1 \times 1$$] convolutional layers, which have a similar effect but prevent the explosion of parameters that normally occurs. We use the Caffe deep learning framework [21] to implement our networks¹. [] Fig. 3. Layout of the single-frame and LSTM networks. Each of the red Convolution layers is followed by a max pooling layer and a rectified linear layer. The max pooling layers have the same kernel size as the convolution layer they follow, and the first two convolution layers in each network have a stride of 2 (all others and all convolution layers have a stride of 1). Each of the blue 1 [$$\times $$] 1 Convolution layers is followed by a rectified linear layer. Refer to Fig. 1 of [17] for more details on the LSTM layer. CNN. The first layout is a standard convolutional neural network (CNN). It takes in an image and outputs probabilities for each class label at each pixel. It has a fixed number of convolutional layers, each followed by a rectified linear layer and a max pooling layer. In place of fully-connected layers, we use two [$$1 \times 1$$] convolutional layers, each followed by a rectified linear layer. The last layer of the network is a deconvolutional layer that upsamples the output of the [$$1 \times 1$$] convolutional layers to be the same size as the input image. This network is shown in Fig. 3a. MF-CNN. The second layout is a multi-frame CNN. Instead of taking in a single frame, it takes as input multiple consecutive frames and predicts the probability of each class label for each pixel at the last frame. It is similar to the single-frame CNN network shown in Fig. 3a except each frame is convolved independently through the first 5 convolution layers, and then the output for each frame is concatenated together channel-wise. This is fed to the two [$$1 \times 1$$] convolutional layers, each followed by a rectified linear layer, and finally a deconvolutional layer. We fix the number of input frames for this layout to 32 for this paper, i.e., approximately 1 s worth of data (30 frames per second), which we empirically determined strikes the best balance between window size and memory utilization. LSTM-CNN. The third layout is similar to the single frame CNN layout, with the first [$$1 \times 1$$] convolutional layer replaced with a LSTM layer (see Fig. 1 of [17] for a detailed layout of the LSTM layer). We replace the fully-connected layers of a standard LSTM with [$$1 \times 1$$] convolutional layers. The LSTM takes as recurrent input the cell state from the previous timestep, its output from the previous timestep, and the output of the network from the previous timestep processed through 3 convolutional layers (each followed by a rectified linear and max pooling layer). During training, when unrolling the LSTM-CNN, we initialize this last recurrent input with the ground truth at the first timestep, but during testing we use the standard recurrent network technique of initializing it with all zeros. Figure 3b shows the layout of the LSTM-CNN. 4 Evaluation We evaluated our networks on 4 experiments: fixed-viewpoint detection, multi-viewpoint detection, fixed-viewpoint tracking, and combined detection & tracking. We define the detection task as, given raw color images, determine where the visible liquid in the images is. We define the tracking task as, given segmented images (i.e., images that have already been run through a detector), determine where all liquid (visible and occluded) is in the image. Intuitively, detection corresponds to perceiving the liquid, while tracking corresponds to reasoning about where the liquid is given what is (and has been) visible. Every network was trained using the mini-batch gradient descent method Adam [22] with a learning rate of 0.0001 and default momentum values. Each network was trained for 61,000 iterations, at which point performance tended to plateau. All single-frame networks were trained using a batch size of 32; all multi-frame networks with a window of 32 and batch size of 1; and all LSTM networks with a batch size of 5. For all experiments except the third (fixed-viewpoint tracking), the input images were scaled to [$$400 \times 300$$] resolution. The error signal was computed using the softmax with loss layer built into Caffe [21]. We empirically determined, however, that naively training a network in this setup results in it predicting no liquid present in any scene at all due to the significant positive-negative class imbalance (most of the pixels in each image are non-liquid pixels). To counteract this we employed two strategies. The first was to pre-train the network on [$$160 \times 160$$] crops of the image around liquid pixels. Since our networks are fully-convolutional, they can have variable sized inputs and outputs, which means a network pre-trained in this manner can be immediately trained on full images without needing any modification. The second strategy was to weight the gradients from the error signal based on the class of the ground truth pixel: 1.0 for positive pixels and 0.1 for negative pixels. This decreases the effect of the non-liquid pixels and prevents the network from predicting no liquid in the scene. We report the precision and recall of each network on a hold-out test set, evaluated on pixel-wise classifications. We also report the precision and recall for various amounts of “slack,” i.e., we count a pixel labeled as liquid correct if it is within n pixels of a ground truth liquid pixel, where n is the amount of slack. This better evaluates the network in cases where it’s predictions are only a few pixels off, which is a relatively small error given the resolution of the images. 4.1 Experiment 1: Fixed-Viewpoint Detection We evaluated all three network types on a fixed-viewpoint detection task. We define fixed-viewpoint in this context to mean data generated as described in Sect. 3.1 for which the camera elevation is level with the table and the azimuth is either north (as shown in the top row of Fig. 2) or south (180 [$$^\circ $$] opposite). The networks were given the full rendered RGB image as input (similar to the left column in Fig. 2) and the output was a classification at each pixel as liquid or not liquid. To counteract the class imbalance, we employed visible liquid image crop pre-training for each network (we initialized the image crop LSTM-CNN with the trained weights of the image crop single-frame CNN). We then trained the final network for each type on full images initializing it with the weights of the image crop network. During training, the LSTM-CNN was unrolled for 32 timesteps. 4.2 Experiment 2: Multi-Viewpoint Detection For the second experiment, we expanded the data used to include all 48 viewpoints, presenting a non-trivial increase in difficulty for the networks. Our goal was to test the generalizability of the networks across a much wider variation in viewpoints. For this reason, we focused only on testing the best performing network, the LSTM-CNN (see Sect. 5.1 for results from experiment 1). Also to test generalizability, we only trained the network on a subset of the 48 viewpoints, and tested on the remaining. We used all data generated using the 8 m and 12 m camera viewpoint distances for training and data generated using the 10 m camera distance for testing. We also employed the gradient weighting scheme described above to counteract the class imbalance. The LSTM-CNN was trained in the same manner as in experiment 1. 4.3 Experiment 3: Fixed-Viewpoint Tracking For tracking only, the networks were given pre-segmented input images, with the goal being to track the liquid when it is not visible. An example of this input is shown in the first row of the right column from Fig. 2, with the exception that the occluded liquid (magenta and cyan) were not shown. Because these input images are more structured, we lowered the resolution to [$$130 \times 100$$]. The output was the pixel-wise classification of liquid or not liquid, including pixels where the liquid was occluded by other objects in the scene. During training, the LSTM-CNN was unrolled for 160 timesteps. We reduced the number of initial convolution layers on the input from 5 to 3 for each of the three networks. Due to the structured nature of the input, each network was trained directly on full images with Gaussian-random weight initialization. We used the data from the same viewpoints (level with the table and azimuth at north or south) as in experiment 1. 4.4 Experiment 4: Combined Detection and Tracking For the last experiment, we combine detection and tracking into a single task, i.e., given raw color images, determine where all liquid in the scene is (visible and occluded). Our goal is to determine if it is possible to do both tasks with one network, and for this reason, we evaluate only the LSTM-CNN. We initialized the network with the weights of the trained LSTM-CNN from experiment 1 and trained it on full images. As in experiment 2, we employed the gradient weighting scheme described above to counteract the class imbalance. We used the data from the same viewpoints as in experiment 1 and 3. 5 Results 5.1 Fixed-Viewpoint Detection [] Fig. 4. Qualitative fixed-viewpoint liquid detection results. The Input column is the input to the networks, the Labels column is the ground truth labeling of each pixel as liquid or not liquid, and the CNN, MF-CNN, and LSTM-CNN columns show a heatmap of the prediction of each network for each of the input frames. 5 sequences were randomly selected from our training set, and the frame with the most liquid pixels was picked for display here, with the exception of the last row, which shows how the networks perform when there is no liquid present. [] Fig. 5. Quantitative fixed- and multi-viewpoint liquid detection results. The graphs indicate the precision and recall for each of the three networks on fixed-viewpoint detection and the LSTM-CNN on multi-viewpoint detection. The colored lines indicate the variation in the number of slack pixels we allowed for prediction, i.e., how many pixels a positive classification could be away from a positive ground truth labeling and still be counted as correct. Figure 4 shows qualitative results for the three networks on the liquid detection task². The frames in this figure were randomly selected from the training set, and it is clear from the figure that all three networks detect the liquid at least to some degree. Figures 5a to c show a quantitative comparison between the three networks. As expected, the multi-frame CNN outperforms the single-frame. Surprisingly, the LSTM-CNN performs much better than both by a significant margin. These results strongly suggest that detecting transparent liquid must be done over a series of frames, rather than a single frame. 5.2 Multi-Viewpoint Detection Figure 5d shows the results from multi-viewpoint detection for the LSTM-CNN. As expected, the 8-fold increase in number of viewpoints leads to lower performance as compared to Fig. 5c, but overall it is clearly still able to detect the liquid reasonably well. Interestingly, there is less spread between the various levels of slack than in Fig. 5c, meaning the network benefits less from increased slack, suggesting that it is less precise than the fixed-view LSTM-CNN, which makes sense given the much larger variation in viewpoints. 5.3 Fixed-Viewpoint Tracking For tracking, we evaluated the performance of the networks on locating both visible and invisible liquid, given segmented input (i.e., each pixel classified as liquid, cup, bowl, or background). Because the viewpoint was fixed level with the bowl, the only visible liquid the network was given was liquid as it passed from cup to bowl. Figures 6a to c show the performance of each of the three networks. As expected, the LSTM-CNN has the best performance. Interestingly, the multi-frame CNN performs better than expected, given that it only sees approximately 1 s worth of data and has no memory capability. [] Fig. 6. Fixed-viewpoint liquid tracking and combined detection & tracking results. Similar to Fig. 5, the graphs indicate the precision and recall for each of the three networks and the colored lines indicate the variation in the number of slack pixels we allowed for prediction. 5.4 Combined Detection and Tracking Figure 6d shows the results of combined detection and tracking for the LSTM-CNN. Given a raw color image, the network predicted where both the visible and occluded liquid was. Comparing this to the rest of Fig. 6, it is clear that the network was able to do quite well, despite using raw, unstructured input, unlike the other networks in that figure. This strongly suggests that LSTM-CNNs are best suited not only for detecting liquids, but also tracking them. 5.5 Preliminary Real Robot Results Figure 7b shows qualitative results of the LSTM-CNN trained on a small dataset collected on a real robot in our lab³. We used a thermal infrared camera calibrated to our RGB camera in combination with heated water to acquire ground truth labeling for data collected using a real robot. The advantage of this method is that heated water appears identical to room temperature water on a standard color camera, but is easily distinguishable on a thermal camera. This allows us to label the “hot” pixels as liquid and all other pixels as not liquid. Figure 7a shows our robot setup with the thermal and RGB cameras. It is clear from Fig. 7b that our methods, to at least a limited degree, apply to real world data and not just data generated by a liquid simulator. [] Fig. 7. The left figure shows our robot, it’s attached thermal and RGB cameras, and example output of each camera. The right figure shows the output of a LSTM-CNN trained on data collected on the real robot. The Input column is the input to the network, the Labels column is the ground truth labeling of each pixel as liquid or not liquid, and the LSTM-CNN column shows a heatmap of the prediction of the network for each of the input frames. 6 Discussion and Conclusion The results in Sect. 5 show that it is possible for deep learning to detect and track liquids in a scene, both independently and combined, and also over a wide variation in viewpoints. Unlike prior work on image segmentation, these results clearly show that single images are not sufficient to reliably perceive liquids. Intuitively, this makes sense, as a transparent liquid can only be perceived through its refractions, reflections, and specularities, which vary significantly from frame to frame, thus necessitating aggregating information over multiple frames. We also found that LSTM-based CNNs are best suited to not only aggregate this information, but also to track the liquid as it moves between containers. LSTMs work best, due to not only their ability to perform short term data integration (just like the MF-CNN), but also to remember states, which is crucial for tracking the presence of liquids even when they’re invisible. From the results shown in Fig. 4 and in the video⁴, it is clear that the LSTM CNN can at least roughly detect and track liquids. Nevertheless, unlike the task of image segmentation, our ultimate goal is not to perfectly estimate the potential location of liquids, but to perceive and reason about the liquid such that it is possible to manipulate it using raw sensory data. For this, a rough sense of where the liquid is in a scene and how it is moving might suffice. Neural networks, then, have the potential to be a key component for enabling robots to handle liquids using robust, closed-loop controllers. 7 Future Work We are currently on expanding the real robot results from Sect. 5.5. As stated in Sect. 3, it can be difficult to get the ground truth pixel labels for real data, which is why we chose to use a realistic liquid simulator in this paper. However, our method of combing a thermal camera with heated water to get the ground truth makes it feasible to apply the techniques in this paper to data collected on a real robot. For future work we plan to collect more data on the real robot using this technique and do a thorough analysis of the results. Another avenue of future work we are currently pursuing is extending these techniques to control problems. The results here clearly show that deep neural networks can effectively be used to detect and, to some extent at least, reason about liquids. The next logical step is to utilize neural networks to manipulate liquids via a robot. One potential algorithm to accomplish this is Guided Policy Search (GPS) [23], which learns a control policy for a task from raw sensory data. The advantage of an algorithm like GPS is that it works well on high-dimensional sensory input where collecting large amounts of data may be infeasible (as is often the case on a real robotic system). In future work we plan to apply a similar algorithm to the problem of robotic liquid control from raw sensory data. Acknowledgments This work was funded in part by the National Science Foundation under contract number NSF-NRI-1525251 and by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC). References 1. Guo, X., Singh, S., Lee, H., Lewis, R.L., Wang, X.: Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In: NIPS, pp. 3338–3346 (2014) 2. Levine, S., Finn, C., Darrell, T., Abbeel, P.: End-to-end training of deep visuomotor policies. arXiv preprint arxiv:​1504.​00702 (2015) 3. Kunze, L., Beetz, M.: Envisioning the qualitative effects of robot manipulation actions using simulation-based projections. Artif. Intell. (2015) 4. Yamaguchi, A., Atkeson, C.G.: Differential dynamic programming with temporally decomposed dynamics. In: Humanoids, pp. 696–703 (2015) 5. Langsfeld, J., Kaipa, K., Gentili, R., Reggia, J., Gupta, S.: Incorporating failure-to-success transitions in imitation learning for a dynamic pouring task. In: IROS Workshop on Compliant Manipulation (2014) 6. Okada, K., Kojima, M., Sagawa, Y., Ichino, T., Sato, K., Inaba, M.: Vision based behavior verification system of humanoid robot for daily environment tasks. In: Humanoids, pp. 7–12 (2006) 7. Tamosiunaite, M., Nemec, B., Ude, A., Wörgötter, F.: Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives. Rob. Auton. Syst. 59(11), 910–922 (2011)CrossRef 8. Cakmak, M., Thomaz, A.L.: Designing robot learners that ask good questions. In: HRI, pp. 17–24. ACM (2012) 9. Rozo, L., Jimenez, P., Torras, C.: Force-based robot learning of pouring skills using parametric hidden markov models. In: RoMoCo, pp. 227–232 (2013) 10. Rankin, A., Matthies, L.: Daytime water detection based on color variation. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 215–221 (2010) 11. Rankin, A.L., Matthies, L.H., Bellutta, P.: Daytime water detection based on sky reflections. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 5329–5336 (2011) 12. Griffith, S., Sukhoy, V., Wegter, T., Stoytchev, A.: Object categorization in the sink: learning behavior-grounded object categories with water. In: Proceedings of the 2012 ICRA Workshop on Semantic Perception, Mapping and Exploration. Citeseer (2012) 13. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR, pp. 3431–3440 (2015) 14. Havaei, M., Davy, A., Warde-Farley, D., Biard, A., Courville, A., Bengio, Y., Pal, C., Jodoin, P.M., Larochelle, H.: Brain tumor segmentation with deep neural networks. arXiv preprint arxiv:​1505.​03540 (2015) 15. Romera-Paredes, B., Torr, P.H.: Recurrent instance segmentation. arXiv preprint arxiv:​1511.​08250 (2015) 16. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)CrossRef 17. Greff, K., Srivastava, R.K., Koutník, J., Steunebrink, B.R., Schmidhuber, J.: Lstm: A search space odyssey. arXiv preprint arxiv:​1503.​04069 (2015) 18. Oh, J., Guo, X., Lee, H., Lewis, R.L., Singh, S.: Action-conditional video prediction using deep networks in atari games. In: Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., Garnett, R. (eds.) NIPS, pp. 2863–2871 (2015) 19. Blender - A 3D Modelling and Rendering Package. Blender Foundation, Blender Institute, Amsterdam (2016) 20. Körner, C., Pohl, T., Rüde, U., Thürey, N., Zeiser, T.: Parallel lattice Boltzmann methods for CFD applications. In: Bruaset, A.R., Tveito, A. (eds.) Numerical Solution of Partial Differential Equations on Parallel Computers. LNCSE, vol. 51, pp. 439–466. Springer, Heidelberg (2006)CrossRef 21. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arxiv:​1408.​5093 (2014) 22. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arxiv:​1412.​6980 (2014) 23. Levine, S., Koltun, V.: Guided policy search. In: ICML (3), pp. 1–9 (2013) Footnotes 1 The network structure files (prototxt) can be found on our project page at http://​rse-lab.​cs.​washington.​edu/​projects/​liquids/​.   2 Video of the full sequences at https://​youtu.​be/​m5z0aFZgEX8.   3 Full video of results at https://​youtu.​be/​4pbjSqg5zfQ.   4 Video of the full sequences at https://​youtu.​be/​m5z0aFZgEX8.   Aerial Robots 2 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_44 Vision-Based Obstacle Avoidance for Micro Air Vehicles Using an Egocylindrical Depth Map Roland Brockers¹  , Anthony Fragoso¹  , Brandon Rothrock¹  , Connor Lee¹   and Larry Matthies¹   (1) Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California, USA     Roland Brockers (Corresponding author) Email: brockers@jpl.nasa.gov   Anthony Fragoso Email: afragoso@caltech.edu   Brandon Rothrock Email: brandon.rothrock@jpl.nasa.gov   Connor Lee Email: clee@caltech.edu   Larry Matthies Email: lhm@jpl.nasa.gov Abstract Obstacle avoidance is an essential capability for micro air vehicles. Prior approaches have mainly been either purely reactive, mapping low-level visual features directly to headings, or deliberative methods that use onboard 3-D sensors to create a 3-D, voxel-based world model, then generate 3-D trajectories and check them for potential collisions with the world model. Onboard 3-D sensor suites have had limited fields of view. We use forward-looking stereo vision and lateral structure from motion to give a very wide horizontal and vertical field of regard. We fuse depth maps from these sources in a novel robot-centered, cylindrical, inverse range map we call an egocylinder. Configuration space expansion directly on the egocylinder gives a very compact representation of visible freespace. This supports very efficient motion planning and collision-checking with better performance guarantees than standard reactive methods. We show the feasibility of this approach experimentally in a challenging outdoor environment. 1 Introduction Micro air vehicles (MAVs) require onboard obstacle detection and avoidance systems with minimal size, weight, power, complexity, and cost, using sensors with a very large field of regard for maneuvering in cluttered spaces. Vision-based approaches have excellent potential to address these needs for many applications. In prior work [13], we used stereo vision for forward-looking depth perception and showed that inverse range maps in image space can be used for MAV motion planning, with configuration space (C-space) obstacle expansion done in image space, dynamically feasible trajectories generated in 3-D Cartesian space, and collision-checking done by projecting candidate trajectories into image space to determine whether they intersect obstacles. This is a very efficient approach to geometric representation and collision checking, and the overall approach is quite effective where the goal is obstacle avoidance rather than mapping. In this paper, we extend the total field of regard to about 180[$$^{\circ }$$] by adding side-looking cameras with structure from motion (SfM) for depth perception. We project the depth data from all cameras onto a cylindrical inverse range image we call an egocylinder, and perform C-space expansion on the egocylinder. To reduce the computational cost of motion planning for low-speed flight, we currently use a simple method to choose directions toward more distant goals that stay within freespace shown by the egocylinder. This entire architecture is a step toward our ultimate goal of integrating depth data over time with this data structure and formulating more sophisticated motion planning algorithms directly in image space. Experiments in a challenging outdoor environment have demonstrated the promise of this approach. 2 Related Work Pros and cons of various passive and active sensor options for MAV obstacle avoidance were discussed in [10, 13]; recent examples of MAV systems using multiple types of sensors are described in [5, 14]. Here we focus on methods that use vision. Vision-based approaches break down according to how they do vision, scene representation, and planning and control. The main approaches to vision use optical flow, learning, and/or monocular or stereo depth perception. Optical flow methods typically design reactive control algorithms with optical flow input. Control algorithms for provably stable wall-following and corridor-following behavior have been developed this way [11]; however, navigation that requires a discrete choice among alternate directions requires higher-level perception or reasoning. Machine learning methods have also been used to map optical flow and other monocular visual features into reactive obstacle avoidance behaviors [4], but it is difficult to generalize this approach to work in a wide variety of conditions, so most work on MAV obstacle detection uses depth perception. Forward-looking monocular depth perception via structure from motion (SfM) has been used for MAVs [1, 3], but requires aircraft motion to measure depth and has poor depth perception near the focus of expansion. Stereo vision overcomes these limitations, works well in many outdoor settings in particular, and small, fast stereo implementations are progressing [9, 12, 17]. The predominant approach to scene representation has been to use 2-D or 3-D Cartesian probabilistic grid maps, which can be used with motion planning algorithms that vary from reactive to deliberative and from local to global [5, 8, 17, 18]. These methods are particularly useful for mapping, exploration, or obstacle avoidance in areas that require memory of previously examined avenues; however, they use a lot of storage and computation, and scaling to high speed flight requires multiresolution grids. For obstacle avoidance per se, less expensive representation and planning algorithms are possible. Often these representations are polar in nature, matching the polar angular resolution of the depth sensors [2, 15]. In [16], depth data from two onboard stereo pairs was fused in a cylindrical inverse range map centered on the vehicle. This work introduced C-space obstacle expansion of an image space depth map, though in a limited fashion based on an assumed ground plane. In [13], we generalized the C-space expansion to be based on the actual depth at each pixel and developed the first combination of an image space depth representation with a dynamics-aware motion planner; feasible trajectories were generated in 3-D and projected into image space to do collision checking by testing for intersections with the C-space expanded depth map. This approach to obstacle representation and collision checking is fast and showed good potential in experiments; however, the field of view was limited and the CL-RRT motion planning algorithm was computationally expensive. [] Fig. 1. System architecture. 3 Technical Approach Figure 1 illustrates the sensing, processing, and algorithm architecture of our approach, which is implemented on an AscTec Pelican quadrotor (Fig. 5). To minimize the sensor hardware, we augment forward-looking stereo by adding single cameras aimed 45[$$^{\circ }$$] to each side, giving a total field of regard of about 180[$$^{\circ }$$]. Stereo matching is done with local block matching for speed, which works adequately well in our test environments. To obtain depth perception with the side-looking cameras, we examined several options for using two-frame optical flow algorithms, as well as the LSD-SLAM incremental SfM algorithm [6]. LSD-SLAM constrains optical flow search to epipolar lines computed from estimated camera motion, and incrementally improves depth resolution by using each new image to update depth estimates in keyframes. We found this to be much less noisy than unconstrained optical flow algorithms, so we use this approach. Since monocular SfM has an unobservable scale factor, we estimate scale by comparing SfM range measurements with range from stereo in the image overlap areas between the outer SfM cameras and the stereo cameras. [] Fig. 2. Schematic illustration of stereo and SfM depth maps fusion into the egocylinder representation, and C-space expansion of the egocylinder. Using inverse range, the expansion widens closer objects more than farther objects. To provide a unified depth representation, we project the stereo and scaled SfM depth maps onto a cylinder centered on the aircraft, which is stored as a disparity map where each pixel records the inverse radial distance to the nearest object in the direction of the pixel (Fig. 2). Currently the egocylinder has the orientation of the body frame, though it could be aligned with the gravity vector. C-space expansion is done on this disparity map similarly to [13], which essentially reduces the range at each pixel and widens objects in the depth map in proportion to the width and height of the aircraft plus a safety margin. Using inverse range conveniently gives a finite representation (zero) to objects beyond the maximum range of the sensors. Quantized inverse range also matches the range uncertainty characteristics of vision-based depth estimation. The expanded egocylinder allows the aircraft to be treated as a point for collision checking. In this paper, we evaluate the innovations in the perception and representation system with a simple, fast avoidance algorithm that is safe if there are no major perceptual errors. At the obstacle densities and velocities considered here, we employ a reduced dynamical model in which the vehicle can turn with infinite agility but requires a finite distance to come to a stop. Accordingly, we restrict the set of possible vehicle trajectories at any instant to the set of straight lines extending radially from the vehicle. Collision-free trajectories are then extracted from this set by transforming the vehicle velocity into an inverse-range safety horizon that is based on the time required to come to a complete stop. After first checking the goal direction in order to avoid a search if possible, the entire planning horizon is checked against the egocylinder to eliminate flight directions that violate the safety horizon constraint. A simple scan of the remaining pixels then returns the collision-free direction that is closest to the goal direction (Fig. 3). [] Fig. 3. Motion planning schematic and simulation. Left: selected flight direction to avoid obstacle. Right: simulated flights through cluttered environments without cul-de-sacs were successful up to speeds over 15 m/s (top view). To maximize safety and visibility of the scene ahead, a low-level controller executes the command by yawing the cameras towards the requested direction while separate PID loops maintain forward velocity and eliminate side slip around the turn. We have also implemented a simple temporal filtering feature that provides robustness against noisy or missing depth data — once a point on the planning horizon is chosen, it is propagated forward with the motion of vehicle for a few cycles and assigned a preference over the egospace pixel scan. In addition to reducing latency by allowing the planning pipeline to be bypassed most of the time, this memory feature tends to smooth out the flight of the vehicle through complicated or noisy environments, in which the target would otherwise change frequently, and prevents dropped frames or other gaps in visual input from ending a flight. This entire planning approach is very fast, safe, and allows us to focus on evaluating perception at the cost of sacrificing algorithmic completeness and strict satisfaction of the full vehicle dynamics. Ongoing work will employ a more sophisticated image space motion planning algorithm that can accommodate these issues. Figure 1 shows how all parts of the algorithm mapped onto our three-level processor architecture. Images are processed at [$$384 \times 240$$] pixel resolution. Stereo runs at 5 fps, LSD-SLAM at 10 fps, and the egocylinder and motion plan are updated at the stereo frame rate of 5 fps. Planning itself takes under a millisecond to verify that the current direction is still safe, and a few milliseconds if it is necessary to search for a new direction. Visual-inertial state estimation is done with a nadir-pointed camera using methods from [19]. Operating outdoors in areas with bright sunlight and deep shadow is difficult, because it creates very large intra-scene (within the same image) and inter-scene (between successive images) illumination variations that greatly exceed the linear dynamic range of available cameras. This has been especially problematic in experiments we have conducted in a grove of trees (see Sect. 4) using a nadir-pointed camera for state estimation. The most effective way to address this is to improve dynamic range at the sensor level. There are multiple potential ways to do this. Some approaches acquire multiple images separated in time and combine these in software; this is impractical on a moving robot. Another approach uses hardware design in the imager that provides a multi-linear exposure mode that approximates a logarithmic image response. This mode is implemented in the Matrix Vision mvBlueFOX-200w CMOS cameras we use and can extend the total dynamic range from 55 dB to 110 dB. These cameras have three linear segments in their photometric response function, where the slope and transition point of the second and third segments is controlled by a two sets of knee point parameters. Creating a good exposure for given scene conditions requires choosing the total exposure time and setting appropriate knee point parameters. [] Fig. 4. Non-HDR (left) and HDR (right) images in a forest scene. Large areas are saturated or under-exposed in the non-HDR image. The HDR image has a better distribution of intensity values, which leads to better performance of vision algorithms. We have taken a first step toward exploiting this multi-linear HDR mode in the following camera initialization procedure, which is run once at the start of an experiment (Fig. 4). First, we acquire a series of images while adjusting exposure time via gradient descent to push the average intensity of the image stream towards a target intensity in the middle of the pixel brightness range. Next, we fix the total exposure time while seeking the parameters of each knee point that maximize image entropy. In an iterative process, each knee point is set sequentially to maximize local entropy. This does not simultaneously optimize the setting of both knee points, but it avoids extra parameters and has shown to improve feature tracking performance. Once the exposure parameters are initialized, they are fixed for the duration of the flight, which has been adequate in our test conditions to date. Ideally, exposure should be optimized on every frame; however, our current optimization procedure is too slow for that and large changes of exposure have potential to require changes to feature tracking algorithms to maintain landmark tracking across exposure discontinuities. The latter issue was out of scope for this paper. 4 Results We have conducted low-speed ([$$<1$$] m/s) experimental trials in a grove of trees that provided a relatively high obstacle frequency (Fig. 5). Flights totaled over 500 meters in aggregate length, during which 65 trees were encountered. This area had very difficult illumination conditions due to the combination of brightly sunlit and deeply shadowed areas in the same image. Figure 6 shows results of the vision pipeline at several points during such a run, as well as a 2-D map produced after the fact from data logs. [] Fig. 5. Left: Grove of trees test area; Right: AscTec Pelican with 4 camera head. [] Fig. 6. Results of a 20 m experimental flight through a grove of trees. Top: the results of the perception system for three different locations on the run, showing the left rectified stereo image, the fused egocylinder, and the C-space expanded egocylinder with selected direction of flight (red crosses). This only shows the central 180[$$^{\circ }$$] of the egocylinder. Bottom: a top down 2-D plot of the trajectory and nearby obstacle pixels from the egocylinder over the whole run. Arrows and numbers on the trajectory show where the three images above were acquired. Vehicle speed was 1 m/s throughout. The saturated and underexposed areas of the images in Fig. 4 illustrate the dynamic range problem with these illumination conditions. While the C-space expansion effectively fills in many areas that have missing data in depth maps from stereo and SfM, this forest environment was particularly challenging for the visual-inertial state estimation system. Therefore, we focused HDR experiments on the state estimation camera, where use of the HDR mode improved the average percentage of map landmarks that could be matched in each frame from 61% to 79%. Nevertheless, the floor of the forest had many very small, self-similar features, and doing state estimation with a nadir-pointed camera while flying low (< 2 m above the ground) in this environment still made state estimation by far the weakest link in the system. The detection and evasion portions of the architecture were very reliable in the performance evaluation experiments, which were analyzed quantitatively by noting the frequency and cause of any human intervention required to avoid a collision. These modules were responsible for only a single intervention event during the 521 m recorded, which resulted in successful avoidance of 64 out of 65 trees for an success rate of [$$98\%$$]. The intervention was attributed to a missed detection in which the vehicle had drifted too close to an obstacle and could no longer detect it using stereo matching. LSD-SLAM failed to adequately track features about 25% of the time; with data logging, the LSD-SLAM frame rate dropped to about 8 Hz, which is too slow for this algorithm to be reliable. However, this did not impact overall obstacle avoidance performance, because the control policy of first turning the stereo cameras towards the flight direction, and incorporating a small amount of path hysteresis, provided a high degree of robustness to missed left or right camera LSD-SLAM depth maps, which were seamlessly reacquired beginning on the next frame. 5 Main Experimental Insights Using C-space expansion of image space depth maps for collision checking is a very new approach to obstacle avoidance for MAVs. In our experiments to date, obstacle avoidance has been quite successful; in 521 m of flight in challenging conditions, only one intervention was needed in 65 encounters with obstacles, and no problems with false alarms in freespace were apparent. This is significant, since the approach so far does not include explicit temporal fusion for false alarm suppression or filling in missing data, unlike approaches based on voxel maps. Nevertheless, work is in progress to add temporal fusion to image space representations to address the finite probability that these problems will eventually occur. By far the biggest performance problem in this system is with visual state estimation. Using a nadir pointed camera while flying low (< 2 m above ground) in a scene with a very high dynamic range of illumination and many small, self-similar features (leaves) on the forest floor seems to be at the heart of the problem. We plan to address this in several ways in ongoing work, including using visual feature tracking in the forward and sideward looking cameras. LSD-SLAM was successful as a source of side-looking depth data, but it requires a high frame rate ([$${>}10$$] Hz) and accurate calibration of camera extrinsics to maintain its usefulness for obstacle detection, both of which were problematic in this implementation. Side-looking stereo cameras might be easier to use, but would lack the potential of exploiting increasing motion baselines to improve depth resolution that exists with recursive approaches to structure from motion. Ultimately, combining both may be a good approach, as is explored in a recent stereo extension of LSD-SLAM [7]. The disparity-space reactive planner extends the advantages of the C-space expansion method and egocylinder to the planning regime — potential trajectories are selected and executed in a highly economical fashion by employing the same framework that allows the egocylinder to represent obstacles compactly and efficiently. Overall, this choice of representation demonstrates decreased planning latency and complexity compared to world-coordinate methods. There is a close connection between vehicle velocity, uncertainty in the range data, and successful obstacle avoidance — this has not emerged as an issue for the slow speeds of our experiments so far, but for reliable obstacle detection to scale to high speeds, this interplay will require further study. Several approaches may improve the maximum range and range resolution of the system to support higher velocities, including the use of higher resolution imagery and potentially the use of temporal fusion of depth maps for improved range resolution. Scenes involving moving obstacles will require extensions of both the perception and the planning elements of this system. Acknowledgments This work was funded by the Army Research Laboratory under the Micro Autonomous Systems & Technology Collaborative Technology Alliance program (MAST-CTA). JPL contributions were carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. References 1. Alvarez, H., Paz, L.M., Sturm, J., Cremers, D.: Collision avoidance for quadratures with a monocular camera. In: Hsieh, M.A., Khatib, O., Kumar, V. (eds.) Experimental Robotics. Springer Tracts in Advanced Robotics, vol. 109, pp. 195–209. Springer, Heidelberg (2016)CrossRef 2. Bajracharya, M., Howard, A., Matthies, L., Tang, B., Turmon, M.: Autonomous off-road navigation with end-to-end learning for the LAGR program. Field Robot. 26(1), 3–25 (2009)CrossRefMATH 3. Daftry, D., Dey, D., Sandhawalia, H., Zeng, S., Bagnell, J.A., Hebert, M.: Semi-dense visual odometry for monocular navigation in cluttered environments. In: IEEE International Conference on Robotics and Automation, Workshop on Recent Advances in Sensing and Actuation for Bioinspired Agile Flight (2015) 4. Dey, D., et al.: Vision and learning for deliberative monocular cluttered flight. In: 10th Conference on Field and Service Robotics (2015) 5. Droeschel, D., Nieuwenhuisen, M., Beul, M., Holz, D., Stuckler, J., Behnke, S.: Multi-layered mapping and navigation for autonomous micro air vehicles. Field Robot. (2015) 6. Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In: European Conference on Computer Vision (ECCV), September 2014 7. Engel, J., Stuckler, J., Cremers, D.: Large-scale direct SLAM with stereo cameras. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, September 2015 8. Fraundorfer, F., Heng, L., Honegger, D., Lee, G.H., Meier, L., Tanskanen, P., Pollefeys, M.: Vision-based autonomous mapping and exploration using a quadrotor MAV. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (2012) 9. Goldberg, S.B., Matthies, L.: Stereo and IMU assisted visual odometry on an OMAP3530 for small robots. In: IEEE Conference on Computer Vision and Pattern Recognition, Workshop on Embedded Computer Vision (2011) 10. Kendoul, F.: Survey of advances in guidance, navigation, and control of unmanned rotorcraft systems. Field Robot. 29(2), 315–378 (2012)CrossRef 11. Keshavan, J., Gremillion, G., Alvarez-Escobar, H., Humbert, J.S.: Autonomous vision-based navigation of a quadrature in corridor-like environments. Int. J. Micro Air Veh. 7(2), 111–123 (2015)CrossRef 12. Kuhn, M., Moser, S., Isler, O., Gurkaynak, F.K., Burg, A., Felber, N., Kaelin, H., Fichtner, W.: Efficient ASIC implementation of a real-time depth mapping stereo vision system. In: IEEE 46th Midwest Symposium on Circuits and Systems (2003) 13. Matthies, L., Brockers, R., Kuwata, Y., Weiss, S.: Stereo vision-based obstacle avoidance for micro air vehicles using disparity space. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 3242–3249 (2014) 14. Nuske, S., Choudhury, S., Jain, S., Chambers, A., Yoder, L., Scherer, S., Chambelain, L., Cover, H., Singh, S.: Autonomous exploration and motion planning for an unmanned aerial vehicle navigating rivers. Field Robot. 32(8), 1141–1162 (2015)CrossRef 15. Oleynikova, H., Honegger, D., Pollefeys, M.: Reactive avoidance using embedded stereo vision for MAV flight. In: IEEE International Conference on Robotics and Automation (2015) 16. Otte, M.W., Richardson, S.G., Mulligan, J., Grudic, G.: Path planning in image space for autonomous robot navigation in unstructured outdoor environments. Field Robot. 26(2), 212–240 (2009)CrossRef 17. Schmid, K., Lutz, P., Tomic, T., Mair, E., Hirschmuller, H.: Autonomous vision-based micro air vehicle for indoor and outdoor navigation. Field Robot. 31(4), 537–570 (2014)CrossRef 18. Shen, S., Michael, N., Kumar, V.: 3D indoor exploration with a computationally constrained MAV. In: Robotics: Science and Systems (2011) 19. Weiss, S., Achtelik, M., Lynen, S., Achtelik, M., Kneip, L., Chli, M., Siegwart, R.: Monocular vision for long-term micro aerial vehicle state estimation: a compendium. Field Robot. 30(5), 803–831 (2013)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_45 Transformable Multirotor with Two-Dimensional Multilinks: Modeling, Control, and Whole-Body Aerial Manipulation Moju Zhao¹  , Koji Kawasaki¹  , Xiangyu Chen¹  , Yohei Kakiuchi¹  , Kei Okada¹   and Masayuki Inaba¹   (1) JSK Lab Engineering Building 2, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan     Moju Zhao (Corresponding author) Email: chou@jsk.imi.i.u-tokyo.ac.jp URL: http://www.jsk.t.u-tokyo.ac.jp/index.html   Koji Kawasaki URL: http://www.jsk.t.u-tokyo.ac.jp/index.html   Xiangyu Chen URL: http://www.jsk.t.u-tokyo.ac.jp/index.html   Yohei Kakiuchi URL: http://www.jsk.t.u-tokyo.ac.jp/index.html   Kei Okada URL: http://www.jsk.t.u-tokyo.ac.jp/index.html   Masayuki Inaba URL: http://www.jsk.t.u-tokyo.ac.jp/index.html Abstract In this paper, we introduce a novel type of the multirotor aerial vehicle with two-dimensional multilinks which enables the stable aerial transformation for high mobility in three-dimensional environments. Our goal is to hold and carry object by using the whole-body manipulation in the air. The research involved three steps. First, we developed the modeling of the link modules that compose a multirotor with two-dimensional multilinks and conducted a quadrotor prototype. Second, we derived a stable flight control method for aerial transformation on the basis of linear-quadratic-integral optimal control. Third, we investigated the whole-body aerial manipulation based on the enveloping grasping method for the four-link type which takes the additional inertial parameters and joint torque into account. Successful aerial transformation and manipulation with the quadrotor prototype were demonstrated, confirming the feasibility of our proposed transformable multirotor for aerial maneuvering. Keywords Transformable multirotorModeling and controlAerial manipulation 1 Introduction Our main interest is the aerial manipulation which uses the whole body to grasp the target object. Many related works achieved the aerial manipulation by attaching the arm manipulator to the bottom of aerial robot [1, 2] and using gripper to grasp object [3, 4]. However, several challenges must be met while operating the manipulation using arm manipulator, including the difficulties to keep attitude balance of aerial robot while the center of gravity of manipulator changes, and to grasp object larger than the gripper. Herein, we have therefore focused on the potential of the whole-body aerial manipulation using transformable multirotor as shown in Fig. 1 to address the challenges as state previously. The transformable multirotor is an area of active research. Most of the work has been done on rotors with tilt mechanisms to allow the direction of the thrust to be varied [5, 6]. However, the performance of manipulation can not be improved with tilt rotor structure. Therefore we developed the multirotor with two-dimensional multilinks of which the whole body can be regarded as an arm to operate manipulation [7]. [] Fig. 1. Main contributions of this research: (1) modeling of a multirotor with two-dimensional multilinks; (2) flight control enabling stable aerial transformation; (3) whole-body aerial manipulation. The main contributions of our work can be summarized as follows: - Modeling of a transformable multirotor aerial vehicle with two-dimensional multilinking. - Control method for stable hovering flight during aerial transformation based on LQI control theory. - Enveloping grasp method of the whole-body aerial manipulation with the quadrotor type with 4 links, based on the estimation of the inertial parameters and joint torque control. - Experiments of the aerial transformation and manipulation which confirms the feasibility of our proposed transformable multirotor. 2 Approach 2.1 Modelling of the Transformable Multirotor We investigated a transformable multirotor comprising link modules with built-in propellers and joint modules with the same rotational axis, allowing the two dimensional transformation. As shown in Fig. 2, each of the link modules has a single propeller at the center, while the joint modules are attached at the two ends of the link modules. The dynamic model is shown in Fig. 3, where [$$F\_i$$][N] and [$$T\_i$$][Nm] denote the lifting force and torque generated by the rotation of the propeller. [] Fig. 2. Base structure of proposed multirotor and the link module. At the end of the module, there are joint modules with servo motors. [] Fig. 3. The dynamic model of the quadrotor prototype described in coordinate frame [$$\{CoG\}$$]. [$$F\_i$$] and [$$T\_i$$] denote the lifting force and torque generated by each propeller The dynamics of the translational motion and rotational motion can be written as follows: [$$\begin{aligned} M\ddot{{\varvec{r}}}\_{{CoG}} = \left[ \begin{array}{c} 0 \\ 0 \\ -Mg\\ \end{array} \right] + R\left[ \begin{array}{c} 0 \\ 0 \\ {\mathop \sum \nolimits }\_{i=1}^{N} F\_i \end{array} \right] ; R =R\_z\left( \psi \right) R\_y\left( \theta \right) R\_x\left( \varphi \right) \end{aligned}$$] (1) [$$\begin{aligned} I\_{multilink} \left[ \begin{array}{c} \dot{w\_x} \\ \dot{w\_y} \\ \dot{w\_z} \\ \end{array} \right] = \left[ \begin{array}{c} {\mathop \sum \nolimits }\_{i=1}^{N}y\_{i} F\_{i}\\ -{\mathop \sum \nolimits }\_{i=1}^{N}x\_{i} F\_{i} \\ {\mathop \sum \nolimits }\_{i=1}^{N} T\_{i}\\ \end{array} \right] - \left[ \begin{array}{c} w\_x \\ w\_y \\ w\_z \\ \end{array} \right] \times I\_{multilink} \left[ \begin{array}{c} w\_x \\ w\_y \\ w\_z \\ \end{array} \right] \end{aligned}$$] (2) where [$${\varvec{r}}\_{{CoG}}$$] and [$$I\_{multilink}$$] are the positions of the center of mass and the principal moment of inertia respectively. [$$ \left[ \begin{array}{ccc} w\_x&w\_y&w\_z \end{array} \right] ^{\mathrm {T}} $$] is the vector of the angular velocity of the multilink, while [$$ \left[ \begin{array}{ccc} \dot{\varphi }&\dot{\theta }&\dot{\psi } \end{array} \right] ^{\mathrm {T}}$$] are the time derivatives of Eular angles standing for roll, pitch and yaw respectively. 2.2 Attitude and Altitude Control Based on LQI Control As described in our previous work [7], the dynamics of the model can be integrated into the simultaneous equations as follows: [$$\begin{aligned}&\qquad \qquad \qquad \qquad \quad \,\,\, \ddot{\varvec{y}} = P {\varvec{u}} - {\varvec{G}} \\&{\varvec{y}} = \left[ \begin{array}{cccc} z &{} \varphi &{} \theta &{} \psi \\ \end{array} \right] ^{\mathrm {T}} ;~ {\varvec{u}} = \left[ \begin{array}{ccc} F\_1 &{} \cdots &{} F\_N \\ \end{array} \right] ^{\mathrm {T}} ;~ {\varvec{G}} = \left[ \begin{array}{cccc} g &{} 0 &{} 0 &{} 0 \\ \end{array} \right] ^{\mathrm {T}} \nonumber \end{aligned}$$] (3) Note that matrix P represents the configuration of the multilinks, including the arrangement of the propellers. [$$\begin{aligned}&\qquad \qquad \qquad \qquad \qquad \qquad \qquad P = \left[ \begin{array}{llll} \mathbf{p}\_z &{} \mathbf{p}\_x &{} \mathbf{p}\_y &{} \mathbf{p}\_c \\ \end{array} \right] ^{\mathrm {T}} \\&\mathbf{p}\_z = \left[ \begin{array}{lll} \bar{m}\_1 &{} \cdots &{} \bar{m}\_N \\ \end{array} \right] ^{\mathrm {T}} ; \mathbf{p}\_x = \left[ \begin{array}{lll} \bar{x}\_1 &{} \cdots &{} \bar{x}\_N \\ \end{array} \right] ^{\mathrm {T}} ; \mathbf{p}\_y = \left[ \begin{array}{lll} \bar{y}\_1 &{} \cdots &{} \bar{y}\_N \\ \end{array} \right] ^{\mathrm {T}} ; \mathbf{p}\_c = \left[ \begin{array}{lll} \bar{c}\_1 &{} \cdots &{} \bar{c}\_N \\ \end{array} \right] ^{\mathrm {T}} \nonumber \end{aligned}$$] (4) where [$$\bar{x}\_i, \bar{y}\_i$$], [$$\bar{c}\_i$$] and [$$\bar{m}\_i$$] are normalized products. [$$\begin{aligned} \bar{x}\_i = \frac{-x\_i}{I\_{multi \\_ yy}},~\bar{y}\_i = \frac{y\_i}{I\_{multi \\_ xx}},~\bar{c}\_i = \frac{c\_i}{I\_{multi \\_ zz}},~\bar{m}\_{i} = \frac{1}{M} \end{aligned}$$] (5) We introduce the following state equation for LQI system about attitude and altitude control with a new state ([$${\varvec{x}} = \left[ \begin{array}{cccccccc} z&\dot{z}&\varphi&\dot{\varphi }&\theta&\dot{\theta }&\psi&\dot{\psi } \end{array} \right] ^{\mathrm {T}}$$]). [$$\begin{aligned}&\dot{{\varvec{x}}} = A {\varvec{x}} + B {\varvec{u}} + {\varvec{d}} \end{aligned}$$] (6) [$$\begin{aligned}&\qquad \qquad \quad {\varvec{y}} = C{\varvec{x}}\\&{\varvec{x}} \in R^{8}, {\varvec{u}} \in R^{N}, {\varvec{y}} \in R^{4}, {\varvec{d}} \in R^{8} \nonumber \end{aligned}$$] (7) where [$$\begin{aligned} A = \left[ \begin{array}{cccccccc} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \end{array} \right] ,\; B = \left[ \begin{array}{cccccccc} \mathbf{0}&\mathbf{p}\_z&\mathbf{0}&\mathbf{p}\_x&\mathbf{0}&\mathbf{p}\_y&\mathbf{0}&\mathbf{p}\_c \end{array} \right] ^{\mathrm {T}} ,\; C = \left[ \begin{array}{cccccccc} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0 \\ 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 1 &{} 0 \\ \end{array} \right] \end{aligned}$$] In Eqs. 6 [$$\sim $$] 7, [$${\varvec{u}}$$] and [$${\varvec{y}}$$] are the input and output of the control system, while [$${\varvec{d}} = \left[ \begin{array}{cccccccc} 0 &{} -g &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \end{array} \right] ^{\mathrm {T}} $$] can be regarded as constant noise in the control system. We extend the state equation by modifying the state and input as follows: [$$\begin{aligned} \tilde{{\varvec{x}}} \equiv {\varvec{x}} - {\varvec{x}}\_s ;\quad \tilde{{\varvec{u}}} \equiv {\varvec{u}} - {\varvec{u}}\_s \end{aligned}$$] (8) where [$${\varvec{x}}\_s$$] and [$${\varvec{u}}\_s$$] are the final values at the steady state. As shown in Eq. 9, we also introduce a tracking error between the reference input and the system output [$${\varvec{e}}$$] and the integral value [$${{\varvec{v}}}$$] [$$\begin{aligned} \dot{{\varvec{v}}} = {\varvec{e}} = {\varvec{r}} - {\varvec{y}} = C {\varvec{x}}\_s - C{\varvec{x}} = - C \tilde{{\varvec{x}}} \end{aligned}$$] (9) Based on the extended state equation (Eq. 10), we design a cost function given by Eq. 11. [$$\begin{aligned}&\qquad \qquad \,\,\, \dot{\bar{{\varvec{x}}}} = \bar{A} \bar{{\varvec{x}}} + \bar{B} \tilde{{\varvec{u}}} \\&\bar{{\varvec{x}}} = \left[ \begin{array}{c} \tilde{{\varvec{x}}} \\ {\varvec{v}} \end{array} \right] ; \bar{A} =\left[ \begin{array}{cc} A &{} O\_{{8,4}}\\ -C &{} O\_{{4,4}} \end{array} \right] ; \bar{B} =\left[ \begin{array}{c} B \\ O\_{{4,N}} \end{array} \right] \nonumber \end{aligned}$$] (10) [$$\begin{aligned} J = \int ^{\infty }\_0 \left( \bar{{\varvec{x}}}^{\mathrm {T}} Q \bar{{\varvec{x}}} + \tilde{{\varvec{u}}}^{\mathrm {T}} R \tilde{{\varvec{u}}} \right) dt \end{aligned}$$] (11) where Q and R are the positive semi-definite and positive definite matrices respectively, which effect the convergence characteristics of the control system. The optimal input [$${\varvec{u}}\_0$$] which provides the minimum cost can be obtained by solving the algebraic Riccati equation (ARE) of Eq. 12. [$$\begin{aligned}&\bar{A}^{\mathrm {T}} \varPi + \varPi \bar{A} + Q - \varPi \bar{B} R^{-1} \bar{B}^{\mathrm {T}} \varPi = 0 \end{aligned}$$] (12) [$$\begin{aligned}&K \equiv \left[ \begin{array}{cc} K\_1&K\_2 \end{array} \right] = - R^{-1} \bar{B}^{\mathrm {T}} \varPi \end{aligned}$$] (13) [$$\begin{aligned}&{\varvec{\tilde{u}}}\_0 = K\bar{\varvec{x}} \end{aligned}$$] (14) The final input strategy for the attitude and altitude control system is derived as follows: [$$\begin{aligned}&{\varvec{u}}\_0 = K\_1 {\varvec{x}} + K\_2 {\varvec{v}} + N {\varvec{r}} \end{aligned}$$] (15) [$$\begin{aligned}&N = - K\_1 C^{\mathrm {T}} \end{aligned}$$] (16) 2.3 Whole-Body Aerial Manipulation Related works about the enveloping grasps can be classified into two categories: force-closure [8] and form-closure [9]. Although the kinematics of the multilinks and grasped object have been analyzed completely, the geometric features of proposed multirotor in this paper is not suitable for the previous works which regard the link as rod model. Therefore we develop the original enveloping grasp method as shown in Fig. 4, which focuses on the quadrotor type with 4 links. In order to simplify the grasp issue, we only take box-shaped object into consideration, and the angle of second joint ([$$\theta \_2$$]) should always be [$$\frac{\pi }{2}$$]. Also note that the axial symmetry exists around the dotted line in Fig. 4. With the simplification rules, the angles of joints and the range of the side length of box (d) can be derived as follows: [$$\begin{aligned} \theta \_1 = \theta \_3 = arccos(\frac{2(2r+d-l)}{l}) \end{aligned}$$] (17) [$$\begin{aligned} l - 2r< d < \root \of {{\frac{l^2}{4} - r^2}} + l - 2r \end{aligned}$$] (18) In order to guarantee the performance of the whole-body grasping on the air, we also proposed the estimation method to obtain inertial parameters of the grasped object. We first estimate the mass by calculating the offset of the total thrust after picking up the object. As to the moment of inertia, the equation of simple box model ([$$I\_{xx} = I\_{yy} = I\_{zz} = \frac{md^2}{6}; I\_{xz} = I\_{xy} = I\_{yz} = 0$$]) is applied, since we assume that the mass of the object is distributed uniformly. Although the dynamic model (Eq. 4) changes due to the attachment of the grasped object, we consider that the integral action in LQI control system (Eq. 15) can compensate the bias if the values of inertial parameters are much smaller than these of the multirotor. [] Fig. 4. Whole-body enveloping grasp strategy in the case of grasping simple boxed-shape object with quadrotor types. The control method about joint torque is developed to provide enough vertical friction at contact points which is the direct fact for pickup. To generate the certain torque, the angles of joints for enveloping should smaller than the designed angles described in Eq. 17 with certain offset ([$$\varDelta \theta $$]) which matches following condition for each joint torque. We set the upper limit since the overload may occur in the case of real servo motor. [$$\begin{aligned} \tau \_{min}< \tau \_i(\theta \_1 + \varDelta \theta , \theta \_2 + \varDelta \theta , \theta \_3 + \varDelta \theta ) < \tau \_{max} \end{aligned}$$] (19) 3 Experiments 3.1 Multirotor Platform We conducted flight experiments using the quadrotor prototype as shown in Fig. 5. The length of each link is 0.44[m], while the diameter of the propeller protector is 0.31[m]. The whole weight is 2.05 Kg. The rotation range of each joint is [$$-\frac{\pi }{2}[rad] \sim \frac{\pi }{2}[rad]$$]. As the multilink can transform only in two dimensions, the attitude estimated by the onboard IMU is treated as that of the entire body. On the other hand, the three-dimensional position was tracked by the motion capture system, representing nearly ground truth, and sent to the onboard controller board via WiFi. [] Fig. 5. The hardware components of the prototype which contains 4 links. 3.2 Aerial Transformation We performed experiments on aerial transformation with the following path as shown in Fig. 6(Right a). Representative images from the aerial transformation are given in Fig. 6(Left), while the positional error and attitude associated in the change of joint angles are shown in Fig. 6(Right b) and (Right c). Note that x and y errors were less than 0.2[m] and 0.1[m] respectively, and the pitch and roll angles were always less than 0.07[rad]. These experimental results show stability of hovering during the aerial transformation. [] Fig. 6. Left: Representative images from the experimental trial of the prototype transforming in the air. Right(a): the path of joint angles of real machine, based on the motion planning. Right(b): the change of the attitude in terms of pitching and rolling. Right(c): the change of the error about the 3D position and yaw angle at a target hovering point. 3.3 Whole-Body Aerial Manipulation In the aerial manipulation experiment, we used the box of which the side length is 200[mm] and the weight is 490[g] as the target object. According to the proposed enveloping rules as described in Eqs. 17 and 19, we set to the goal angles as [$$\theta \_1 = 1.33[rad], \theta \_2 = 1.57[rad], \theta \_3 = 1.33[rad]$$]. The motion capture was also used to localize the position and orientation of the target object. As shown in Fig. 7, the process of aerial manipulation can be summarized as follows: (1) moving close to object with the normal form ([$$\theta \_i = \frac{\pi }{2}$$]) and hovering above the object; (2) changing the form with a certain margin ([$$\theta \_1 = 0.8[rad], \theta \_2 = 1.2[rad], \theta \_3 = 0.8[rad]$$]) and descending to surround the object; (3) changing form to grasp the object; (4) ascending to pick up the object; (5) moving over recovery box and dropping object. As shown in Fig. 8(b), the difference of height between aerial robot and grasped object is relatively constant, indicating the effectiveness of the enveloping grasp. We considered that the variation of the relative difference during grasping is caused by the tilt of aerial robot for position control as shown in Fig. 8(a). On the other hands, Fig. 8(c) and (d) also showed the enveloping grasp in terms of torque. However, the error of the position control caused the difference between the final angles of joints and goal angles, and the yaw control caused the variation of torque at joint2 which was below the minimum value at some times. However, this torque never change the signal, which guaranteed the enough friction in the vertical direction. As to the inertial parameters of object, we estimated that the mass of object is about 450[g] as shown in Fig. 8(b), which is almost correspond with the real value. Then the inertia can be calculated as: [$$I\_{xx} = I\_{yy} = I\_{zz} = 0.003$$], which can be neglected since the inertia of aerial robot ([$$I\_{xx} = 0.073, I\_{yy} = 0.081, I\_{zz} = 0.15$$]) is large enough. [] Fig. 7. The process of aerial manipulation. [] Fig. 8. (a) The change of the attitude in terms of pitching and rolling. (b) The difference of height between aerial robot and object, along with the total thrust calculated from LQI control system during flight. (c) The torque of each joint. (d) The angle of each joint. 4 Conclusions and Future Work In this work, we detailed a methodology for aerial grasping by the whole-body manipulation. We first presented the modeling method of the multirotor with two-dimensional multilink. We introduced a flight control method for stable hovering under arbitrary forms based on an LQI optimal servomechanism control system. We also developed the whole-body aerial manipulation which calculates the angles of 4 link model using the planar grasping method which takes the joint torque and inertial parameters into account. We closed by analyzing experimental results, showing the feasibility of aerial transformation and aerial manipulation using the quadrotor type prototype. We are interested in moving forward by enhancing our current enveloping grasp method to extend to arbitrary link number and arbitrary shape of the object with optimal enveloping angles and torques of joints. We are also interested in developing the object shape detection by onboard sensors to address the realtime grasp planning for object with unknown shape. References 1. Kondak, K., Huber, F., Schwarzbach, M., Laiacker, M., Sommer, D., Bejar, M., Ollero, A.: Aerial manipulation robot composed of an autonomous helicopter and a 7 degrees of freedom industrial manipulator. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2107–2112, May 2014 2. Lippiello, V., Cacace, J., Santamaria-Navarro, A., Andrade-Cetto, J., Trujillo, M.A., Esteves, Y.R., Viguria, A.: Hybrid visual servoing with hierarchical task composition for aerial manipulation. IEEE Rob. Autom. Lett. 1(1), 259–266 (2016)CrossRef 3. Mellinger, D., Lindsey, Q., Shomin, M., Kumar, V.: Design, modeling, estimation and control for aerial grasping and manipulation. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2668–2673, September 2011 4. Michael, N., Fink, J., Kumar, V.: Cooperative manipulation and transportation with aerial robots. In: Proceedings of Robotics: Science and Systems, Seattle, USA, June 2009 5. Kawasaki, K., Motegi, Y., Zhao, M., Okada, K., Inaba, M.: Dual connected bi-copter with new wall trace locomotion feasibility that can fly at arbitrary tilt angle. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 524–531, September 2015 6. Barkai, S., Rand, O., Peyran, R., Carlson, R.: Modeling and analysis of tilt-rotor aeromechanical phenomena. Math. Comput. Model. 27(12), 17–43 (1998)CrossRef 7. Zhao, M., Kawasaki, K., Okada, K., Inaba, M.: Transformable multirotor with two-dimensional multilinks: modeling, control, and motion planning for aerial transformation. Adv. Rob. 30(13), 825–845 (2016)CrossRef 8. Watanabe, T., Harada, K., Yoshikawa, T., Jiang, Z.: Towards whole arm manipulation by contact state transition. In: 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5682–5687, October 2006 9. Reyes, F., Ma, S.: On planar grasping with snake robots: form-closure with enveloping grasps. In: 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 556–561, December 2014 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_46 Localization of a Ground Robot by Aerial Robots for GPS-Deprived Control with Temporal Logic Constraints Eric Cristofalo¹  , Kevin Leahy², Cristian-Ioan Vasile³, Eduardo Montijano⁴, Mac Schwager¹ and Calin Belta² (1) Stanford University, Stanford, California, USA (2) Boston University, Boston, Massachusetts, USA (3) Massachusetts Institute of Technology, Cambridge, Massachusetts, USA (4) Centro Universitario de la Defensa (CUD), Zaragoza, Spain     Eric Cristofalo Email: ecristof@stanford.edu Abstract In this work, we present a novel vision-based solution for operating a vehicle under Gaussian Distribution Temporal Logic (GDTL) constraints without global positioning infrastructure. We first present the mapping component that builds a high-resolution map of the environment by flying a team of two aerial vehicles in formation with sensor information provided by their onboard cameras. The control policy for the ground robot is synthesized under temporal and uncertainty constraints given the semantically labeled map. Finally, the ground robot executes the control policy given pose estimates from a dedicated aerial robot that tracks and localizes the ground robot. The proposed method is validated using a two-wheeled ground robot and a quadrotor with a camera for ten successful experimental trials. Keywords Vision-based localizationTemporal logic planningAir-ground localizationHeterogeneous robot systems 1 Introduction In this paper, we propose a solution to the following problem: localize and control a ground robot under temporal logic (TL) specifications in an environment with no global positioning infrastructure. Robots operating in the real world typically require accurate pose estimates to compute effective control actions, but in many cases, such as dense urban environments [1], GPS may be unavailable or unreliable. Furthermore, it is advantageous to consider an aerial robot for on-the-fly tracking of the ground robot because it can aid in terms of localization as well as obstacle avoidance, leaving the ground robot dedicated to other tasks. In this work, we present a vision-based, GPS-denied solution to this problem and demonstrate it experimentally with a sensor-deprived ground robot that performs a persistent monitoring task specified by TL, while being localized by a camera-equipped autonomous aerial vehicle (quadrotor). The solution is split into three major components: map building in unknown environments, control synthesis under TL constraints, and localization during the mission. We use vision-based formation control to build the map from multiple aerial vehicles because we obtain a high fidelity mosaic map image without requiring SLAM or other complex mapping algorithms. Our algorithm synthesizes the ground robot’s control policy based on a labeled version of the map and a TL specification. Finally, the ground robot executes the control policy while an aerial robot provides pose measurements. Consider a robot that must perform the following task in an outdoor disaster site: “Periodically collect soil samples from the forest, then the beach. Deliver samples to researchers. Go to a charging station after tasks are complete. Always avoid known obstacles and restricted zones. Ensure that the uncertainty (measured by variance) of the robot’s pose is always below 1 m[$$^2$$].” Such a task may be specified using Gaussian distribution TL (GDTL) [2], a specification language that incorporates the robot’s desired position as well as uncertainty. Unfortunately, the initial position of the robot is completely unknown and common methods to synthesize a control policy for the robot, even while operating under observation noise, will not be sufficient. Our solution alternatively requires the deployment of a small network of quadrotors with cameras to first map the space, prior to computing a control policy. Human operators then label the resulting map to capture the properties expressed in the specification. This process is known as grounding. Afterwards, our algorithm generates a feedback control policy to satisfy the temporal and uncertainty constraints encoded in the specification. With a map image and ground robot control policy, one quadrotor tracks and monitors the ground robot, providing it with pose information that it uses to execute the mission. This work also considers the cooperation between ground and air vehicles and leverages their heterogeneous capabilities to jointly carry out a mission. While other research exists for cooperation among mixed teams of ground and air vehicles, existing research assumes the presence of GPS on either the ground vehicles [3] or on the aerial vehicles [1, 4]. We, on the other hand, assume the robots are working in an environment with no external positioning framework whatsoever. Other work that has focused on planning without GPS, such as [5], uses the visual capabilities of an aerial vehicle to enhance a map built by a ground vehicle. In this work, we assume the map is built by a team of aerial vehicles using their high vantage point so that the ground vehicle can perform a specific task based on the resulting map. Further, unlike these works, in our work, the mission to be carried out is specified using GDTL, allowing for the encoding of much more complex missions, including specifying the uncertainty of the ground vehicle’s localization. Map building and localization in unknown environments could be formulated as in SLAM [6], where a robot uses its onboard sensor data—perhaps only vision [7]—to refine an estimation of its pose while building a map of the environment. Unfortunately, these algorithms are typically computationally demanding and require one or more sensing technologies which may not be feasible to include on a ground robot due to cost, weight, or hardware limitations. Using vision-based solutions from aerial cameras, on the other hand, allows for accurate pose estimation in complicated environments while only employing cheap, readily-available RGB cameras. For example, homography-based visual servoing methods provide accurate localization with only the use of camera data [8]. In this work, we make use of homography-based consensus control methods [9] for the aerial vehicles to build a mosaic map, and monitor the ground robot with a Position-Based Visual Servoing (PBVS) control method designed to keep the robot in the field of view at all times while guaranteeing sufficient overlap with the map. 2 Technical Approach We propose an end-to-end framework (see Fig. 1) that includes a specialized, two-wheeled ground robot and a team of aerial robots, i.e., N quadrotors, each equipped with a downward facing camera and an altimeter. The team of quadrotors are first responsible for building the map of the unknown environment using their onboard camera images. Then the ground robot operates under the computed optimal control policy with the measurements provided by a single quadrotor tracking it from above. The entire framework is divided into three sequential phases that include the following: 1. 1. Generate a mosaic map image of the unknown environment using purely vision and homography-based formation control [9] with multiple quadrotors.   2. 2. Label the generated map and define the mission specification (to be completed by human operator) and then automatically synthesize a satisfying control policy for ground robot using GDTL-Feedback Information RoadMaps, or GDTL-FIRM [2].   3. 3. Simultaneously track and localize the ground robot with a single aerial vehicle using a homography-based pose estimation and position-based visual servoing control method.   [] Fig. 1. The proposed framework includes three major components: (1) mapping in unknown environments, (2) control synthesis, and (3) online tracking and localization of a ground robot. 2.1 Inter-image Homography Map building and ground robot pose estimation rely on the inter-image homography, [$$\mathbf {H}\_{ij} \in \mathbb {R}^{3 \times 3}$$], which defines the linear transformation between co-planar three-dimensional (3D) points described in two different coordinate frames, i.e., [$$\mathbf {P}\_i = \mathbf {H}\_{ij} \mathbf {P}\_j$$], where [$$\mathbf {P}\_i \in \mathbb {R}^3$$] and [$$\mathbf {P}\_j \in \mathbb {R}^3$$]. The perspective projection of these 3D points yields the measured image features, [$$\mathbf {p}\_i \in \mathbb {R}^2$$] and [$$\mathbf {p}\_j \in \mathbb {R}^2$$], that are given by the cameras i and j, respectively. These two image features are related by the following homography, [$$\mathbf {p}\_i = \tilde{\mathbf {H}}\_{ij}\mathbf {p}\_j$$], where [$$\tilde{\mathbf {H}}\_{ij} = \mathbf {K}\mathbf {H}\_{ij} \mathbf {K}^{-1}$$] is estimated using standard least squares estimation [10] with at least four matched image feature points, and [$$\mathbf {K}$$] is the known calibration matrix of the identical cameras. In this work, we assume that all quadrotors are flying at a sufficiently high altitude to justify the co-planar requirements of points on the ground. Further, we assume that the cameras are always parallel to the ground – as with a hovering quadrotor. In this case, the rectified homography describes the transformation between two parallel, calibrated camera poses, [$$\begin{aligned} \mathbf {H}\_{ij}^r =\left[ \displaystyle \begin{array}{ccc} \cos (\psi \_{ij}) &{} -\sin (\psi \_{ij}) &{} -\frac{x\_{ij}}{z\_j} \\ \sin (\psi \_{ij}) &{} \cos (\psi \_{ij}) &{} -\frac{y\_{ij}}{z\_j} \\ 0 &{} 0 &{} 1-\frac{z\_{ij}}{z\_j} \\ \end{array} \right] \ , \end{aligned}$$] (1) where [$$[x\_{ij},y\_{ij},z\_{ij}, \psi \_{ij}]^T \in \mathbb {R}^4$$] is the estimated parallel pose of camera j in the frame of camera i. In practice, we guarantee the parallel camera assumption by removing the roll and pitch effect of a translating quadrotor from the acquired image, i.e., [$$\mathbf {H}\_{ij}^r = \mathbf {R}\_{\theta \_i}\mathbf {R}\_{\phi \_i}\mathbf {K}^{-1} \tilde{\mathbf {H}}\_{ij} \mathbf {K}\mathbf {R}\_{\phi \_j}^T\mathbf {R}\_{\theta \_j}^T$$], given the roll, [$$\phi $$], and pitch, [$$\theta $$], of each quadrotor. We extract the relative position from the last column of [$$\mathbf {H}\_{ij}^r$$], given the altitude of the cameras provided by the altimeter, and the relative orientation from the upper 2 [$$\times $$] 2 block of [$$\mathbf {H}\_{ij}^r$$]. 2.2 Homography-Based Formation Control Homography-based formation control [9] drives the team of quadrotors that generates the high fidelity mosaic map image, which is a composite image of the quadrotors’ onboard images while in formation. The consensus-based kinematic control laws that drive the formation of quadrotors to their desired relative pose, [$$[x\_{i,j}^\*, y\_{i,j}^\*, \psi \_{i,j}^\*]^T$$], are functions of the computed rectified homography from Eq. (1), i.e., [$$\begin{aligned} w\_{z\_i} = K\_w\sum \_{j\in \mathcal {N}\_i} \left( \arctan \left[ \frac{[\mathbf {H}\_{ij}^r]\_{21}}{[\mathbf {H}\_{ij}^r]\_{11}}\right] - \psi \_{ij}^\*\right) , \end{aligned}$$] (2) [$$\begin{aligned} \left[ \begin{array}{cc} v\_{x\_i}\\ v\_{y\_i} \end{array} \right] = K\_v\sum \_{j\in \mathcal {N}\_i} \left( \left[ \begin{array}{cc} \left[ \mathbf {H}\_{ij}^r\right] \_{13} \\ \left[ \mathbf {H}\_{ij}^r\right] \_{23} \end{array} \right] -\left[ \begin{array}{cc} x\_{ij}^\* \\ y\_{ij}^\* \end{array} \right] \right) , \end{aligned}$$] (3) [$$\begin{aligned} v\_{z\_i} = K\_v\sum \_{j\in \mathcal {N}\_i} \left( 1-[\mathbf {H}\_{ij}^r]\_{33}\right) , \end{aligned}$$] (4) where [$$[v\_{x\_i},v\_{y\_i},v\_{z\_i}]^T$$] is the translational velocity control and [$$w\_{z\_i}$$] is the rotational velocity control about the z-axis of the quadrotor, i.e., its yaw. Note that the element in row a and column b of [$$\mathbf {H}\_{ij}^r$$] is denoted by [$$[\mathbf {H}\_{ij}^r]\_{ab}$$]. The relative yaw does not affect [$$z\_{ij}$$], therefore, the relative altitude can be controlled towards zero using [$$[\mathbf {H}\_{ij}^r]\_{33}$$]. The team produces the mosiac map of the environment when the quadrotors reach the chosen formation that yields sufficient image overlap for accurate pose estimation and large enough field of view to cover the region of interest in the environment. It is worth noting that this component of our solution framework could be omitted if given a high resolution map, such as a satellite image. 2.3 GDTL-FIRM The GDTL-FIRM algorithm computes the optimal control action for the ground robot under a GDTL specification given that the previously computed map has been labeled and the specification has been provided. We assume that a human operator labels the map built by the aerial vehicles. Alternately, this labeling could be accomplished automatically by a segmentation and classification algorithm. We utilize the work of [2] to synthesize the control polices for the ground robot with temporal and uncertainty constraints. In brief, the sampling-based algorithm generates a transition system in the belief space and uses local feedback controllers to break the curse of history associated with belief space planning. The algorithm is based on Feedback Information RoadMaps (FIRMs), where points are sampled directly in the state space and feedback controllers are used to stabilize the system about nodes in the roadmap, thus inducing a roadmap in the belief space. A product Markov Decision Process (MDP) between the transition system and the Rabin automaton encoding the GDTL task specification is used to compute the switching control policies. Finally, the MDP is queried for the existence of satisfying control policies of high enough probability. 2.4 Robot Tracking and Localization The ground robot executes its mission in the environment by traversing the transition system generated in the previous phase while employing an Extended Kalman Filter (EKF) to estimate its position with measurements provided by the dedicated aerial vehicle. A localization marker on the ground robot includes two distinctly colored patches that aid in estimating its planar position and orientation in the environment frame. During localization, the quadrotor first localizes the centroid of each patch in the quadrotor’s image frame as two image features, [$$(\mathbf {p}\_1^q,\mathbf {p}\_2^q)$$]. The quadrotor simultaneously calculates the rectified homography between the quadrotor’s image frame (q) and the mosaic map image frame (m), i.e., [$$\mathbf {H}\_{qm}^r$$], to estimate the relative pose between the quadrotor and the map. The quadrotor projects the robot’s pose in the image frame [$$(\mathbf {p}\_1^q,\mathbf {p}\_2^q)$$] to the map frame [$$(\mathbf {p}\_1^m,\mathbf {p}\_2^m)$$] using [$$\mathbf {H}\_{qm}^r$$]. The quadrotor finally computes the ground robot’s final pose in the environment frame (e), given by [$$(x,y,\theta )$$], by linearly interpolating [$$(\mathbf {p}\_1^m,\mathbf {p}\_2^m)$$] with the dimensions of the map image – in pixels – and the known dimensions of the environment – measured in meters. The centroid of the projected features yields the position, (x, y), while the orientation, [$$\theta $$], is calculated using the line that connects the two projected features. Meanwhile, a 2D kinematic PBVS controller maneuvers the aerial robot to track the ground robot while simultaneously keeping sufficient overlap with the mosaic map image for an accurate homography estimation. Recall that the field-of-view of the individual cameras is not sufficient to view the entire environment, hence the requirement for the composite map image. Homography-based control drives the quadrotor into a desired position above the environment that is defined by the estimated position of the ground robot, (x, y). The quadrotor’s position is further constrained to a rectangle, [$$R = [x\_{min}, x\_{max}] \times [y\_{min}, y\_{max}]$$], where the boundaries of R affect the amount of desired overlap with the mosaic image. For example, setting the boundaries equal to the dimensions of the environment will drive the quadrotor directly over the ground robot, thus degrading the homography estimate when hovering near the environment’s edges. Conversely, setting the boundaries equal to zero would keep the quadrotor coincident with the mosaic image frame and will lose coverage when the ground robot is near the edge of the environment. The ideal boundary values for a downward facing camera allows the camera to move just far enough to see the entire environment, i.e., [$$\begin{aligned} \left[ \begin{array}{c} x\_{min} \\ y\_{min} \end{array}\right] = - \left[ \begin{array}{c} x\_{max} \\ y\_{max} \end{array}\right] = \left[ \begin{array}{c} \frac{w\_{e}-{}^{e}w\_{q}}{2} \\ \frac{h\_{e}-{}^{e}h\_{q}}{2} \end{array}\right] \, , \end{aligned}$$] (5) where [$$(w\_{e},h\_{e})$$] are the width and height of the environment in meters, [$$({}^{e}w\_{q},{}^{e}h\_{q})$$] are the dimensions of the quadrotor’s image frame, [$$(w\_{q},h\_{q})$$], after being projected into the environment frame. This projection is computed as, [$$\begin{aligned} \left[ \begin{array}{c} {}^{e}w\_{q} \\ {}^{e}h\_{q} \\ A \end{array}\right] = A\mathbf {K}^{-1} \left[ \begin{array}{c} w\_{q} \\ h\_{q} \\ 1 \end{array}\right] \, , \end{aligned}$$] (6) given the camera’s altitude, A, and camera calibration matrix, [$$\mathbf {K}$$]. The ideal rectangle size for our camera (640 [$$\times $$] 360) at the desired experiment altitude of 1.8 m is approximately [$$0.85\times 1.45$$] m. Unfortunately, our camera is not downward-facing, therefore we expand R to [$$0.85\times 2.0$$] m to ensure proper coverage. Finally, we introduce an optional offset, [$$\mathbf {x}\_{offset}$$], that measures the center of mosaic map image’s virtual position in space with respect to the quadrotor’s frame. We use an offset 0.75 m in the positive x-direction of the local quadrotor frame (see Fig. 2) to account for the forward-facing camera. The final controller is similar to the homography-based formation controller in Sect. 2.2. In fact, the yaw controller of Eq. (2) and the altitude controller of Eq. (4) remain the same with a desired relative pose equal to zero. The planar control vector is calculated as the following, [$$\begin{aligned} \left[ \begin{array}{c} v\_{x} \\ v\_{y} \end{array}\right] = K\_v \left( \left[ \begin{array}{c} \left[ \mathbf {H}\_{qm}^r\right] \_{13} \\ \left[ \mathbf {H}\_{qm}^r\right] \_{23} \end{array}\right] - \left[ \begin{array}{c} linint(x,(0,w\_{e}),(x\_{min},x\_{max})) - x\_{offset} \\ linint(y,(0,h\_{e}),(y\_{min},y\_{max})) - y\_{offset} \end{array}\right] \right) \, , \end{aligned}$$] (7) where [$$linint(\cdot )$$] is the linear interpolation function that transforms the ground robot’s environmental position into the quadrotor’s desired position within R. [] Fig. 2. Coordinate frame definitions for the PBVS controller from Eq. (7) include the: environment frame, mosaic map image frame, quadrotor image frame, mosaic map frame center, and quadrotor frame. The quadrotor estimates the ground robot’s pose [$$(x,y,\theta )$$] by transforming the pose in the quadrotor image frame to the environment frame. The quadrotor manuevers within R based on the ground robots’s pose in the environment frame. The quadrotor local frame and mosaic map frame center are defined with the same orientation as the environment frame. 3 Results and Experiments We validate all three phases of this framework by executing a complete mission experiment with a heterogeneous team of autonomous robots. The phases are completed in the order specified in Sect. 2 due to the dependence on the results from previous phases. We first detail our map building results with a mosaic map that is generated using the homography-based formation control and two quadrotors with cameras that do not have access to GPS. GDTL-FIRM synthesizes the control policy for a ground robot with nonlinear unicycle dynamics in the environment for a GDTL specification over belief states associated with the measurement of the robot’s position. Finally, a quadrotor successfully tracks and localizes the ground robot while it completes the previously defined mission. 3.1 Experimental Setup We perform experiments in the Boston University Robotics Laboratory. We use a map of Boston University’s campus, located in Boston, MA, USA, that includes parts of Charles River, Massachusetts Turnpile, Fenway Stadium, and BU Central campus. We utilize the real landmarks in this map to formulate our specification. This map is chosen because it has sufficient detail and texture to allow for adequate feature matching (e.g., white buildings at the bottom of the map) as well other minimal feature regions (e.g., the Charles River). The physical map is printed on a 12 [$$\times $$] 16 ft[$$^2$$] vinyl banner. We utilize an Optitrack motion capture system¹ for obtaining ground truth measurements. The ground robot is a two-wheeled DrRobot X80Pro² with no onboard sensing. We fit the ground robot with an identifying marker composed of two uniquely colored patches in the YUV color space for planar position and orientation localization (see Fig. 5). Parrot Bebop quadrotors³ are the aerial vehicles used for map building, and later, tracking. The Bebop is an off-the-shelf quadrotor platform with a suite of sensors that include an Inertial Measurement Unit (IMU), a downward-facing pinhole camera for optical flow stabilization, an ultrasonic sensor for altitude measurements, and a 180[$$^{\circ }$$] wide-angle 14 megapixel forward-facing camera. The large forward-facing camera produces a [$$640\times 360$$] pixel stabilized video feed that can be ‘steered’ within the field-of-view of the wide-angle lens to produce a ‘virtual camera’ video feed. We position the virtual camera at the maximum angle of [$$\theta \_{bebop}$$] measured about the y-axis of the quadrotor (see Fig. 3(a)), where [$$\theta \_{bebop}\approx 50^{\circ }$$], and rectify the image for this angle. The Robot Operating System (ROS) [11] handles all communication on a local area network via Wi-Fi. We control the quadrotors from a base station computer running the ROS Bebop Autonomy package [12] which incorporates Parrot’s open-source SDK. The computer also acquires and processes image frames from the quadrotors’ real-time video stream via the OpenCV libraries [13]. Independent ROS nodes handle the individual quadrotors for the formation flight, demonstrating the distributed control. Independent ROS nodes also handle the quadrotor and ground robot control during the tracking phase. In this phase, separate quadrotor nodes handle the image processing for robot localization, pose estimation via homography, and the control. The ground robot node executes the local control and EKF estimation of the ground robot given its pose estimate and nonlinear dynamics. All vision computations are performed on an Ubuntu 14.04 machine with an Intel Core i7 CPU at 2.4 GHz and 8GB RAM. 3.2 Formation Control and Map Generation We utilize a team of two quadrotors to reach a desired formation where, [$$y\_{1,2}^\* = - y\_{2,1}^\* = 1.2$$] m, and all other desired relative poses are set to zero (see Fig. 3(a)). This formation is carefully chosen because it ensures the pair of aerial cameras have enough overlap for accurate relative pose estimation while guaranteeing a complete view of the environment. All quadrotors are flown to a desired height of 1.8 m. The quadrotors reach the desired formation (Fig. 3(c)) from the initial conditions (Fig. 3(b)) in approximately 15 s. From this point, the user has the ability to control one vehicle in the formation to fine tune the result of the online mosaic map, which is displayed at approximately 2.5 Hz. In this experiment, the operator maneuvers quadrotor 1 until the left edge of the map is completely visible and then releases it to autonomous control again. Meanwhile, the formation control law in Sect. 2.2 controls quadrotor 2. The onboard images at the final desired formation (Figs. 3(d)–(e)) were used to generate the final mosaic map image shown in Fig. 3(f). [] Fig. 3. Final mosaic map result using the homography-based formation control method. Note that quadrotor and camera coordinate systems are only labeled once in (a) for clarity. 3.3 GDTL-FIRM The specification for the ground robot is encoded with GDTL and is given as the following: “Always avoid all obstacles, i.e., Charles river and Massachusetts Turnpike. Always eventually visit Kenmore Square, Marsh Plaza, Audubon Circle, and Fenway Stadium. From Kenmore Square or Marsh Plaza, Bridge2 (St Mary’s St) can not be used to visit Audubon Circle or Fenway Stadium. From Audubon Circle or Fenway Stadium, Bridge1 (Beacon Ave or Brookline Ave) can not be used to visit Kenmore Square or Marsh Plaza. Always keep uncertainty about the robot’s pose below 0.9 m[$$^2$$], and on bridges, the uncertainty must be below 0.6 m[$$^2$$], where uncertainty is measured as the trace of the estimation pose covariance matrix.” Fig. 4(a) shows the resulting transition system and control policy, computed by the algorithm from [2]. The transition system has 35 nodes and 226 edges while the product automaton has 316 nodes and 3274 edges. The algorithm executed in approximately 62.24 s. [] Fig. 4. FIRM-GDTL results plotted over the ground truth environment image. (a) shows the transition system in white and the policy in orange. (b) shows the ground truth in green, the measurement in yellow, the estimated pose in red, and the covariance ellipses in blue. (c) shows the ground truth in green for all runs. (d) shows the covariance for all runs. The spikes in covariance indicate the beginning of a new run after a quadrotor battery had been replaced. We initialize the covariance to an arbitrarily large value at time step 0 that drastically decreases with the first pose measurement from the quadrotor at time step 1. 3.4 Pose Estimation and Mission Execution The ground robot executes the mission using the previous control policy and quadrotor for localization. Initially, the quadrotor takes off from a position where the camera’s field of view is facing towards the ground robot. The homography-based localization and quadrotor control (Sect. 2.4) begin once the ground robot’s marker has been detected. The ground robot localization estimates update at approximately 3.5 Hz. We show an example of the robot tracking and pose estimation for three time steps in Fig. 5. It is clear that the control method tracks the ground robot during its route with enough image resolution to detect the robot’s patches and also maintains the required overlap with the mosaic map image. Figure 5 also illustrates the final pose estimation in the mosaic map frame. It is important to note that the ground robot sits 0.2 m above the map, therefore projecting the image features of the ground robot’s marker directly into the map frame would add significant error to the final estimation. The image features are instead offset to the map plane before projecting the features to the mosaic map to satisfy the homography’s planar assumption. We determine this offset by measuring the pose estimation error at the extremes of the map and interpolating for the correction as a function of the estimated pose. We ran the mission five times due to the limitations of the quadrotor battery, yielding ten complete laps of the environment and four partial laps, all of which satisfied the GDTL specification. We show an example run of 2.5 laps in Fig. 4(b) that displays the ground robot’s ground truth pose, estimated pose, measured pose, and uncertainty. We check for satisfaction by inspecting the ground truth of all experimental runs to ensure the robot has reached each region appropriately while avoiding obstacles (Fig. 4(c)). Moreover, the covariance of the robot’s estimate for all experimental runs is safely below the minimum 0.6 requirement, thus satisfying the specification (Fig. 4(d)). [] Fig. 5. Pose estimation results of live tracking and localization are shown in (a, c, e) with onboard images (left) and the mosaic map image (right). The corresponding top views of the experiment is shown in (b, d, f), respectively. The image matches and pose estimations are drawn for visualization purposes. 4 Conclusion The main experimental insight gained from this work is how to feasibly break the dependence on external positioning information while controlling robots under TL specifications. Specifically, we are interested in studying the satisfaction of GDTL specifications by (ground) robots operating under uncertainty. Encoding specifications with GDTL is advantageous because it defines performance goals for the uncertainty of the system, allowing us to complete high-level missions under noisy measurements. This work also gives insight into the formulation of a mobile vision-based sensing method for control under TL specifications. Another technical insight stems from the effects of using off-the-shelf equipment in this framework since airborne cameras are a cheap, light weight sensor solution that allow for high fidelity 3D pose estimation. We show that inexpensive and widely available ground and aerial robots can be used to perform complex missions with TL and uncertainty constraints, therefore adding flexibility in future applications. Moreover, we consider a simple dynamic sensor that is far more reconfigurable than a fixed-camera network alternative. The experimental setup for vision-based control with aerial vehicles also provided valuable experimental insight. The lighting conditions of the flying space proved to be critical and had to be carefully modified to reduce glare from the reflective vinyl banner material. The oblique angle of the quadrotor’s onboard camera also complicated the control strategies since we could not rely on the standard down-ward facing camera assumptions. Lastly, this vision-based technique does encounter pose estimation innacuracies when the quadrotor cameras have very poor resolution compared to the map. Further, the entire pipeline depends on the success of feature matching that encounters problems at drastic resolution differences. However, these experiments show that our framework is well suited for remote outdoor scenarios where aerial vehicles or satellite imagery could serve as the map and only camera-outfitted aerial vehicles are required for localization. Acknowledgements E. Cristofalo was supported in part by the 2015 National Defense Science and Engineering Graduate (NDSEG) fellowship. This work was also supported by US grants NSF CNS-1330008, NSF IIS-1350904, NSF NRI-1426907, NSF CMMI-1400167, ONR N00014-12-1-1000, and Spanish projects DPI2015-69376-R (MINECO/FEDER) and SIRENA (CUD2013-05). We are grateful for this support. References 1. Hsieh, M.A., Cowley, A., Keller, J.F., Chaimowicz, L., Grocholsky, B., Kumar, V., Taylor, C.J., Endo, Y., Arkin, R.C., Jung, B., et al.: Adaptive teams of autonomous aerial and ground robots for situational awareness. J. Field Robot. 24(11–12), 991–1014 (2007)CrossRef 2. Vasile, C.I., Leahy, K., Cristofalo, E., Jones, A., Schwager, M., Belta, C.: Control in belief space with temporal logic specifications. In: Proceedings of the 2016 Conference on Decision and Control (CDC). IEEE (2016, to appear) 3. Vaughan, R.T., Sukhatme, G.S., Mesa-Martinez, F.J., Montgomery, J.F.: Fly spy: lightweight localization and target tracking for cooperating air and ground robots. In: Distributed Autonomous Robotic Systems 4, pp. 315–324. Springer, Japan (2000) 4. Grocholsky, B., Keller, J., Kumar, V., Pappas, G.: Cooperative air and ground surveillance. Robot. Autom. Mag. 13(3), 16–25 (2006)CrossRef 5. Forster, C., Pizzoli, M., Scaramuzza, D.: Air-ground localization and map augmentation using monocular dense reconstruction. In: Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3971–3978. IEEE (2013) 6. Thrun, S., Leonard, J.J.: Simultaneous localization and mapping. In: Springer Handbook of Robotics, pp. 871–889. Springer, Heidelberg (2008) 7. Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: dense tracking and mapping in real-time. In: Proceedings of the 2011 International Conference on Computer Vision (ICCV), pp. 2320–2327. IEEE (2011) 8. Benhimane, S., Malis, E.: Homography-based 2d visual servoing. In: Proceedings of the 2006 International Conference on Robotics and Automation (ICRA), pp. 2397–2402. IEEE (2006) 9. Montijano, E., Cristofalo, E., Zhou, D., Schwager, M., Sagues, C.: Vision-based distributed formation control without an external positioning system. Trans. Robot. 32(1), 339–351 (2016)CrossRef 10. Ma, Y., Soatto, S., Kosecka, J., Sastry, S.S.: An Invitation to 3-d Vision: From Images to Geometric Models, vol. 26. Springer Science & Business Media, New York (2012) 11. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3, p. 5 (2009) 12. Monajjemi, M.: Bebop autonomy (2015). https://​github.​com/​AutonomyLab/​bebop\_​autonomy 13. Bradski, G., et al.: The opencv library. Doctor Dobbs J. 25(11), 120–126 (2000) Footnotes 1 Natural Point Optitrack: https://​www.​optitrack.​com.   2 DrRobot X80Pro: http://​www.​drrobot.​com/​products\_​item.​asp?​itemNumber=​x80pro.   3 Parrot Bebop: http://​www.​parrot.​com/​products/​bebop-drone/​.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_47 On the VINS Resource-Allocation Problem for a Dual-Camera, Small-Size Quadrotor Kejian J. Wu¹  , Tien Do¹  , Luis C. Carrillo-Arce¹   and Stergios I. Roumeliotis¹   (1) MARS Lab, University of Minnesota, Minneapolis, Minnesota 55455, USA     Kejian J. Wu (Corresponding author) Email: kejian@cs.umn.edu   Tien Do Email: doxxx104@umn.edu   Luis C. Carrillo-Arce Email: carrillo@umn.edu   Stergios I. Roumeliotis Email: stergios@cs.umn.edu Abstract In this paper, we present a novel resource-allocation problem formulation for vision-aided inertial navigation systems (VINS) for efficiently localizing micro aerial vehicles equipped with two cameras pointing at different directions. Specifically, based on the quadrotor’s current speed and median distances to the features, the proposed algorithm efficiently distributes processing resources between the two cameras by maximizing the expected information gain from their observations. Experiments confirm that our resource-allocation scheme outperforms alternative naive approaches in achieving significantly higher VINS positioning accuracy when tested onboard quadrotors with severely limited processing resources. This work was supported by the Air Force Office of Scientific Research (FA9550-10-1-0567) and the National Science Foundation (IIS-1111638). 1 Introduction and Related Work In order for micro aerial vehicles (MAVs) to autonomously navigate within GPS-denied areas (e.g., indoors), they need to reliably and efficiently estimate their 3D position and orientation (pose) based on onboard measurements from small-size, lightweight sensors such as cameras and inertial measurement units (IMUs). Previous work on vision-aided inertial navigation systems (VINS) for quadrotors has primarily considered either forward or downward-pointing cameras in conjunction with an IMU. In [10], for example, a single forward-facing camera is employed for performing visual-inertial odometry, while in [15] a stereo pair mounted in front of the quadrotor is used for localizing. On the other hand, [5, 16] focus on efficiently fusing point-feature observations from a downward-pointing camera with inertial measurements, while [6] combines optical flow with IMU data for estimating the vehicle’s linear and rotational velocity. Many quadrotors, however, (e.g., Parrot’s Bebop) are equipped with multiple cameras pointing at different directions (see Fig. 1). In such cases, the downward camera is typically used for optical-flow estimation and position stabilization, while the forward one is often employed for pose determination and navigation (e.g., [3]). As shown in [12], combining visual observations from two or more cameras spanning different viewing directions can lead to significant motion-information gains. Such systems, comprising two stereo pairs, have been employed for determining a quadrotor’s pose using all feature observations jointly [7], or separately [14] for first estimating each stereo-pair’s pose and then combining their estimates for computing the vehicle’s pose. A quadrotor localization system that employs two monocular, differently-pointing cameras is that of [4]. In particular, the optical flow from the downward camera along with the altimeter data are used for estimating the horizontal velocity and resolving the scene’s scale. This information is then combined with the attitude estimates from the IMU and image data from the front camera to perform (on a remote laptop) PTAM-based localization [8] along short paths ([$$\sim $$]16 m). [] Fig. 1. The Bebop cameras’ fov. Besides the lack of a VINS that directly combines observations from two cameras with different viewing directions for estimating the pose of a MAV, very little is known about how to optimize the information gain from each camera. In particular, most prior work on feature selection for improving localization accuracy has considered one or two cameras pointing in the same direction (e.g., [2, 9, 18]). Moreover, existing approaches, although they solve a relaxed version of the computationally-intractable optimal problem, their processing requirements often exceed the computational resources of small-size quadrotors. To address these limitations, the main contributions of this work are as follows: - We introduce a novel resource-allocation problem formulation, which considers the vehicle’s current speed and median distance to the features detected by each camera, as well as approximate models of the impact that each of these parameters has on the expected information gain, so as to efficiently distribute processing resources between the two cameras’ observations. - We extend the square-root inverse sliding window filter (SR-ISWF) of [17] to process visual observations from both cameras of the Bebop quadrotor.¹ - We experimentally validate our proposed, highly efficient, resource-allocation scheme and demonstrate that it allows the VINS algorithm to achieve superior positioning accuracy as compared to alternative approaches (using only one of the two cameras, or processing the same number of features from both of them), while operating in real-time and well within ([$$\sim $$]50% of the CPU time) the severely limited computational resources of the Bebop quadrotor. 2 Technical Approach Assume that a quadrotor is equipped with two cameras, namely a forward-facing camera, [$$\{C\_f\}$$], and a downward-pointing camera, [$$\{C\_d\}$$].² Features are extracted and tracked across image sequences for each camera independently. Selecting the most informative features for localization (i.e., so as to minimize the posterior covariance of the quadrotor’s pose estimates), would require solving the following optimization problem: [$$\begin{aligned}&\min \limits \_{u\_i,v\_j\in \{0,1\}} \ \text {tr}\left( {\mathbf {P}^{\ominus }}^{-1} + \sum \_{i\in \mathcal {S}\_f}u\_i \frac{1}{\sigma ^{2}\_{f}} \mathbf {H}\_i^T\mathbf {H}\_i + \sum \_{j\in \mathcal {S}\_d}v\_j \frac{1}{\sigma ^{2}\_{d}} \mathbf {H}\_j^T\mathbf {H}\_j\right) ^{-1} \\&\text {s.t.} \sum \_{i\in \mathcal {S}\_f}u\_i + \sum \_{j\in \mathcal {S}\_d}v\_j \le \gamma \nonumber \end{aligned}$$] (1) where [$$\mathbf {P}^{\ominus }$$] denotes the prior covariance of the quadrotor’s pose estimates, [$$\mathbf {H}\_i$$] is the feature i’s measurement Jacobian with respect to the pose state, [$$\sigma \_f$$] and [$$\sigma \_d$$] are the measurement noise standard deviations, [$$\mathcal {S}\_f$$] and [$$\mathcal {S}\_d$$] are the sets of features observed by [$$C\_f$$] and [$$C\_d$$], respectively, and [$$\gamma $$] is the maximum number of features that can be processed at each time step. Since (1) is an integer programming problem with NP complexity, prior approaches (e.g., [1]) relax it by ignoring [$$\mathbf {P}^\ominus $$] and allowing [$$u\_i, v\_j\in [0\ 1]$$] so that it becomes convex and can be cast as a semidefinite program (SDP). Its cost, however, remains prohibitively high [$$O((|\mathcal {S}\_f|+|\mathcal {S}\_d|)^3)$$]. Although alternative approximate formulations achieve lower complexity ([$$O((|\mathcal {S}\_f|+|\mathcal {S}\_d|)^2)$$] for [2] and [$$O(\gamma (|\mathcal {S}\_f|+|\mathcal {S}\_d|))$$] for [18]), their processing requirements are still quite high for the Bebop’s limited resources. For this reason, in this work we introduce further approximations to (1) so as to derive a constant-cost solution. Specifically, in order to avoid explicitly evaluating each feature’s measurement Jacobian, we focus on the expected value of the original cost function in (1), over a particular distribution (to be specified later on) of the positions of the features viewed by each camera: [$$\begin{aligned} \mathbb {C}(\lambda ) = \ \mathbb {E} \left[ \text {tr}\left( \lambda \frac{1}{\sigma ^{2}\_{f}} \mathbf {H}\_i^T\mathbf {H}\_i + (1-\lambda )\frac{1}{\sigma ^{2}\_{d}} \mathbf {H}\_j^T\mathbf {H}\_j\right) ^{-1} \right] , \ \ \ \lambda \in [0\ 1]. \end{aligned}$$] (2) Furthermore, by employing Jensen’s inequality and the fact that the function [$$\text {tr} (\mathbf {X}^{-1})$$] is convex [1], it is straightforward to show that [$$\mathbb {C}(\lambda )$$] in (2) has the following lower bound: [$$\begin{aligned} \mathbb {C}\_{lb}(\lambda )&= \text {tr}\left( \mathbb {E} \left[ \lambda \frac{1}{\sigma ^{2}\_{f}} \mathbf {H}\_i^T\mathbf {H}\_i + (1-\lambda )\frac{1}{\sigma ^{2}\_{d}} \mathbf {H}\_j^T\mathbf {H}\_j \right] \right) ^{-1} \nonumber \\&= \text {tr}\left( \lambda \mathbb {E} \left[ \frac{1}{\sigma ^{2}\_{f}} \mathbf {H}\_i^T\mathbf {H}\_i \right] + (1-\lambda ) \mathbb {E} \left[ \frac{1}{\sigma ^{2}\_{d}} \mathbf {H}\_j^T\mathbf {H}\_j \right] \right) ^{-1} \end{aligned}$$] (3) By defining the expected information gain of a feature measurement, from the forward or downward-pointing camera, with respect to the pose state as: [] (4) and substituting into the relaxed cost function [$$\mathbb {C}\_{lb}(\lambda )$$] in (3), our proposed optimization problem can be written as: [] (5) which represents a resource-allocation problem between the two cameras. Note that, once the optimal percentage of resources is allocated to each camera (i.e., the optimal value [$$\lambda ^{\*}$$] is obtained), we then select features within each camera by employing the approach of [9]; i.e., we enforce uniform feature extraction during image processing and select the ones with the longest tracks. As compared to (1), the relaxed optimization problem in (5) has only one scalar variable [$$\lambda $$]. Furthermore, since the matrices [] and [] have a fixed size and can be efficiently computed, (5) can be solved in constant time that only depends on the matrices’ size, regardless of the number of features available from each camera. Thus, and in order to reduce complexity, we first assume that the features’ positions can be accurately triangulated from their first two observations, and then used for localizing the rest of the camera poses in the estimator’s optimization window [17]. Moreover, we ignore the cameras’ orientation, i.e., their (i) roll and pitch, as they are observable and can be precisely estimated [typically, with root mean square error (RMSE) of 0.1[$$^\circ $$]], and (ii) yaw, as its impact over a short time horizon (i.e., the 1 s corresponding to the remaining 4 poses in the estimator’s sliding window) is very small for any error, due to the gyro noise, to become significant. As a result of these relaxations, the measurement Jacobian [$$\mathbf {H}\_i$$] is now determined with respect to only the downward-camera’s position state,³ and hence the size of the information matrices [] and [] becomes 3-by-3. Based on these approximations, in what follows, we present a closed-form expression for evaluating these two matrices. In order to compute the expected information gain from each feature measurement, as defined in (4), we introduce certain simplifying assumptions about the spatial distribution of the features observed by the two cameras. We start by parameterizing every feature i with respect to the camera s, where [$$s\in \{f,d\}$$] is the camera index, by its spherical coordinates (the azimuth angle [$$\phi \_i$$], the polar angle [$$\theta \_i$$], and the distance [$$\rho \_i$$]). Assuming that all features are (i) located on a spherical cap of radius equal to the median distance, [$$\rho \_s$$], of the features currently observed, and (ii) uniformly distributed over the angles [$$\phi \_i$$] and [$$\theta \_i$$], i.e., [$$\begin{aligned} \rho \_i = \rho \_s, \ \ \ \ \phi \_i\sim \text {U}[0,2\pi ], \ \ \ \ \theta \_i\sim \text {U}[0,\theta \_{Ms}] \end{aligned}$$] (6) where [$$\theta \_{Ms}$$] equals half of the field of view (fov) of the camera s, it can be shown (see Appendix A) that the expected information gain becomes: [] (7) [] (8) where [$$k'$$] and k denote the time steps when a feature measurement is considered and when it is first observed, respectively, [$${}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^k}\mathbf {R}$$] represents the rotation matrix between the downward-camera frames corresponding to these two time steps, and [$${}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R}$$] is the extrinsic-calibration rotation matrix between the forward and downward cameras. By substituting (7) into (5), the cost function becomes: [$$\begin{aligned} \mathbb {C}\_{lb}(\lambda )&= \text {tr} \left( \lambda {}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^k}\mathbf {R}^{{\scriptscriptstyle {T}}}{}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d} \mathbf {R}^{{\scriptscriptstyle {T}}} \mathbf {D}\_f {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} \ {}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^k}\mathbf {R} + (1-\lambda ) {}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {R}^{{\scriptscriptstyle {T}}} \mathbf {D}\_d {}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {R} \right) ^{-1} \nonumber \\&= \text {tr} \left( \lambda {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d} \mathbf {R}^{{\scriptscriptstyle {T}}} \mathbf {D}\_f {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} + (1-\lambda ) \mathbf {D}\_d \right) ^{-1} = \frac{f(\lambda )}{g(\lambda )} \end{aligned}$$] (9) where [$$f(\lambda )$$] and [$$g(\lambda )$$] are quadratic and cubic polynomial functions, respectively, of [$$\lambda \in [0\ 1]$$]. To minimize (9), we first compute all the stationary points of the unconstrained optimization problem, which requires solving the quartic equation [$$f^{\prime }(\lambda )g(\lambda ) - g^{\prime }(\lambda )f(\lambda ) = 0$$]. Then, the optimal solution [$$\lambda ^{\*}$$] is the one that yields the minimal cost value [$$\mathbb {C}\_{lb}(\lambda ^{\*})$$] among all feasible ([$$\lambda ^{\*} \in [0\ 1]$$]) stationary points, computed in closed form, together with the boundary values 0 and 1. Figure 2 (left) illustrates the optimal values of [$$\lambda ^\*$$] for different median feature distances [$$\rho \_s$$]. As evident, three regions emerge: (I) for [$$\rho \_f/\rho \_d \ge 2$$] and (III) for [$$\rho \_f/\rho \_d \le 1.15$$] where all processing is allocated to the downward or forward camera, respectively, while in region (II) features from both cameras are processed. [] Fig. 2. (left) Optimal resource allocation [$$\lambda ^{\*}$$] for [$$\rho \_{f} \in [0.2\ 5]$$] m and [$$\rho \_{d} \in [1\ 3]$$] m. (right) Average feature track lengths for each camera at different speeds. The blue-solid and red-dashed lines are the fitted linear and quadratic functions [$$\psi \_f$$] and [$$\psi \_d$$], respectively. At this point, we should note that the preceding formulation does not consider the impact of the quadrotor’s motion on the expected information gain. In particular, due to the limited fov and close distance to the ground, the track length of the downward-camera’s features is quite limited as compared to the front one’s. Moreover, reliably tracking features from the downward camera becomes exceedingly difficult as the quadrotor’s speed increases [see Fig. 2 (right)]. To account for the track length’s impact, we modify the cost function in (9) as: [$$\begin{aligned} \mathbb {C}\_{lb}'(\lambda ) = \text {tr} \left( \lambda \psi \_f {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d} \mathbf {R}^{{\scriptscriptstyle {T}}} \mathbf {D}\_f {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} + (1-\lambda ) \psi \_d \mathbf {D}\_d \right) ^{-1} \end{aligned}$$] (10) where [$$\psi \_f$$] and [$$\psi \_d$$] are the expected feature-track lengths (minus 2, since the first two observations are used for triangulating the feature and do not provide information for localizing the cameras) expressed as functions of the quadrotor’s speed based on prior data [see Fig. 2 (right)]. This modification is motivated by the fact that, in general, the longer a feature track is, the more information it will provide to the sliding-window estimator for determining the camera’s position. Besides this consideration, and due to the fact that the downward-camera’s feature tracking becomes unreliable for speeds higher than 2 m/s, we only consider features from the forward camera during such fast motions. Lastly, once the number of features that will be processed from each camera is determined, i.e., [$$\lambda ^\* \gamma $$] and [$$(1-\lambda ^\*) \gamma $$] for the forward and downward camera, respectively, we employ the method of [9] for selecting the most informative ones within each camera.⁴ 3 Experimental Results To examine the impact of the proposed resource-allocation algorithm on the localization accuracy of VINS, we compared our approach against three naive allocation schemes using as testing platform a MAV. Specifically, the Bebop quadrotor carries an IMU, a 180[$$^\circ $$] fov forward camera with resolution downsampled to [$$300\times 264$$], a 53[$$^\circ $$] fov downward camera with resolution downsampled to [$$320\times 240$$], and a 800 MHz ARM-based dual-core processor. Approximately 200 FAST corners [13] are extracted from the images, and tracked using the Kanade-Lucas-Tomasi (KLT) algorithm [11] across time at a frequency of 15 Hz. The SR-ISWF estimator [17] maintains a sliding window of 6 poses, selected at 5 Hz. For testing our method, we collected two building-scale datasets (path length [$${\sim }200$$] m each) while manually flying the quadrotor at fast speeds (up to 6 m/s) through open spaces, with features far away from the forward camera, as well as during slow motions, including rotations in place, while navigating through narrow passages with nearby scenes. Since the Bebop’s processing resources are quite limited, we allowed the SR-ISWF to process up to 20 features and compared the achieved localization accuracy against the batch least-squares (BLS) estimates (computed offline) for the following configurations: (i) f-only: 20 MSCKF features are used from only the forward camera;⁵ (ii) f-SLAM: 10 MSCKF and 10 SLAM features are used from the forward camera; (iii) fd-EF: resources are equally distributed between the two cameras by fixing the number of MSCKF features processed by each of them to 10 (20 total); and (iv) the proposed fd-D where 20 MSCKF features are dynamically selected from the two cameras. The resource-allocation results of the proposed approach are depicted in Fig. 3, where the optimal [$$\lambda ^{\*}$$] that minimizes the cost function in (10) is plotted, along with the speed of the quadrotor, against time. As evident, our resource-allocation scheme is able to properly adjust to the different motions and scene distances. Specifically, when the quadrotor is flying fast (e.g., during time steps 210–260), only the forward camera is used ([$$\lambda =1$$]) since no features can be reliably tracked across the downward-camera’s images. On the other hand, when the quadrotor navigates through narrow passages with nearby scenes (e.g., many times between time steps 400 and 750), observations from both cameras are used ([$$0<\lambda <1$$]). Lastly, when the quadrotor rotates in place and the scene observed by the forward camera is distant (e.g., many times between time steps 350 and 600), only the downward camera is used ([$$\lambda =0$$]) to maximize the positioning accuracy. [] Fig. 3. The percentage of resources allocated to the forward camera (i.e., optimal value of [$$\lambda $$], shown as black dots) plotted along with the speed of the quadrotor (solid blue line) against time steps, each of duration 0.2 s. [] Fig. 4. Estimated trajectories for three of the four resource-allocation schemes considered against the BLS groundtruth overlaid on the building’s blueprint. Table 1. VINS RMSE for the 4 resource-allocation schemes considered. +-----------+--------+--------+-------+------+ | RMSE (m) | f-only | f-SLAM | fd-EF | fd-D | +:==========+:=======+:=======+:======+:=====+ | Dataset 1 | 2.96 | 2.38 | 2.67 | 1.22 | +-----------+--------+--------+-------+------+ | Dataset 2 | 2.72 | 2.70 | 2.87 | 2.11 | +-----------+--------+--------+-------+------+ In order to assess the impact on the VINS localization accuracy, the root mean square error (RMSE) of the estimated 3D position for each of the four resource-allocation schemes considered is shown in Table 1, while the estimated trajectories for three of them, as well as the BLS groundtruth, overlaid on the building’s blueprint are depicted in Fig. 4. As evident from Table 1, by adjusting the allocation of processing resources based on the vehicle’s speed and the median distance to each camera’s corresponding scene, significant gains in accuracy (0.59–1.16 m lower RMSE) are realized as compared to when using only one of the two cameras, or processing the same number of features from each of them.⁶ This key finding is also visually confirmed by the trajectories shown in Fig. 4 where the one estimated by the SR-ISWF when employing the proposed dynamic resource-allocation scheme best aligns with the BLS groundtruth. Lastly, we note that the dual-camera SR-ISWF runs onboard the Bebop quadrotor and takes less than 100 ms per estimate. Specifically, 6 ms for FAST feature extraction, 36 ms for KLT tracking, 2 ms for RANSAC, and 50 ms for a SR-ISWF update. Since the filter runs at 5 Hz, the overall processing takes [$$\sim $$]500 ms of every second. The remaining processing is reserved for future autonomous navigation tasks such as obstacle detection/avoidance, path planning, and exploration. Videos of the presented experiments can be found at http://​mars.​cs.​umn.​edu/​research/​dual\_​camera\_​quadrotor.​php. 4 Conclusions In this paper, we considered the problem of visual-information selection for efficiently localizing a dual-camera MAV. In particular, instead of addressing the computationally-intractable problem of selecting the most informative feature measurements, we focused on optimally distributing processing resources between the two cameras. To this end, we introduced a novel problem formulation that seeks to maximize the expected information gain based on each camera’s characteristics (fov and noise standard deviation), their geometric configuration, the median features’ distance, and the vehicle’s speed. Moreover, by employing simplifying assumptions about the spatial distribution of the features viewed by each camera, we showed that the optimal solution to the resource-allocation problem can be found in constant time, by solving, in closed form, the quartic equation resulting from the optimality conditions. Our approach was tested experimentally using a small-size quadrotor flying indoors over a wide range of motions and scene distances. In all cases considered, the proposed resource-allocation scheme allowed the VINS algorithm to operate in real time while achieving positioning accuracy superior to that of naive approaches that employ only one of the two cameras, or equally distribute the quadrotor’s processing resources among them. Appendix A In order to compute the expected information matrices in (7), we start by deriving the measurement Jacobian [$$\mathbf {H}\_i$$], appearing in (4), at time step [$$k'$$]. Consider a feature i, observed by the camera s, [$$s\in \{f,d\}$$], whose position, [$$\mathbf {p}\_i$$], with respect to the camera frame [$$\{C\_s^{k'}\}$$], is: [$$\begin{aligned} {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i = \begin{bmatrix} x\_i \\ y\_i \\ z\_i \end{bmatrix} = \begin{bmatrix} \rho \_i\sin \theta \_i\cos \phi \_i \\ \rho \_i\sin \theta \_i\sin \phi \_i \\ \rho \_i\cos \theta \_i \end{bmatrix} \end{aligned}$$] (11) where [$$[x\_i, y\_i, z\_i]^T$$] and [$$[\phi \_i, \theta \_i, \rho \_i]^T$$] are the feature’s Cartesian and spherical coordinates, respectively. The camera measures the perspective projection of feature i: [$$\begin{aligned} \mathbf {z}=\pi \left( {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i\right) + \mathbf {n}\_i = \begin{bmatrix} \frac{x\_i}{z\_i} \\ \frac{y\_i}{z\_i}\end{bmatrix} + \mathbf {n}\_i, \ \ \ \ {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i = {}^{{\scriptscriptstyle {C}}\_s^{k'}}\_{{\scriptscriptstyle {C}}\_s^k}\mathbf {R} ({}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_i - {}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_s^{k'}}) \end{aligned}$$] (12) where [$$\mathbf {n}\_i$$] is the measurement noise and [$${}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_i$$] denotes the feature’s position with respect to the first-observing camera frame, [$$\{C\_s^{k}\}$$], at time step k, while [$${}^{{\scriptscriptstyle {C}}\_s^{k'}}\_{{\scriptscriptstyle {C}}\_s^k}\mathbf {R}$$] and [$${}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_s^{k'}}$$] represent the rotation matrix and translation vector, respectively, between the camera frames at the corresponding time steps k and [$$k'$$]. Based on (12), the measurement Jacobian with respect to the camera’s position is: [$$\begin{aligned} \mathbf {H}\_i = \frac{\partial \pi \left( {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i \right) }{\partial {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i} \frac{\partial {}^{{\scriptscriptstyle {C}}\_s^{k'}}\mathbf {p}\_i}{\partial {}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_s^{k'}}} = -\frac{1}{\rho \_i\cos \theta \_i} \begin{bmatrix}1&0&-\tan \theta \_i\cos {\phi }\_i \\ 0&1&-\tan \theta \_i\sin {\phi }\_i\end{bmatrix} {}^{{\scriptscriptstyle {C}}\_s^{k'}}\_{{\scriptscriptstyle {C}}\_s^k}\mathbf {R} \end{aligned}$$] (13) which leads to the following information matrix: [$$\begin{aligned} \frac{1}{\sigma \_s^2}\mathbf {H}^{{\scriptscriptstyle {T}}}\_i\mathbf {H}\_i = \frac{1}{\sigma \_s^2\rho ^2\_i \cos ^{2}{\theta \_i}} {}^{{\scriptscriptstyle {C}}\_s^{k'}}\_{{\scriptscriptstyle {C}}\_s^k}\mathbf {R}^{{\scriptscriptstyle {T}}} \begin{bmatrix} 1&0&-\tan {\theta \_i}\cos {\phi \_i} \\ 0&1&-\tan {\theta \_i}\sin {\phi \_i} \\ -\tan {\theta \_i}\cos {\phi \_i}&-\tan {\theta \_i}\sin {\phi \_i}&\tan ^{2}{\theta \_i} \end{bmatrix} {}^{{\scriptscriptstyle {C}}\_s^{k'}}\_{{\scriptscriptstyle {C}}\_s^k}\mathbf {R} \end{aligned}$$] (14) By employing the assumptions about the features’ distribution in (6), and substituting (14) into (4), yields: [] (15) Note that [$$\mathbf {H}\_i$$] in (13), and hence [] in (15), is expressed with respect to the position state, [$${}^{{\scriptscriptstyle {C}}\_s^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_s^{k'}}$$], of the camera s [see (13)]. Therefore, and since we chose the system’s state to comprise the downward-camera’s position, [$${}^{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_d^{k'}}$$], the expected information gain from the corresponding feature observations is obtained by directly setting [$$s=d$$] in (15), i.e., [] (16) On the other hand, the forward-camera’s measurement Jacobian also depends on the extrinsics of the two cameras, i.e., [$$\begin{aligned} \mathbf {H}\_j = \frac{\partial \pi \left( {}^{{\scriptscriptstyle {C}}\_f^{k'}}\mathbf {p}\_i \right) }{\partial {}^{{\scriptscriptstyle {C}}\_f^{k'}}\mathbf {p}\_i} \frac{\partial {}^{{\scriptscriptstyle {C}}\_f^{k'}}\mathbf {p}\_i}{\partial {}^{{\scriptscriptstyle {C}}\_f^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_f^{k'}}} \frac{\partial {}^{{\scriptscriptstyle {C}}\_f^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_f^{k'}}}{\partial {}^{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_d^{k'}}}, \ \ \ \ \text {where} \ \ \ \frac{\partial {}^{{\scriptscriptstyle {C}}\_f^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_f^{k'}}}{\partial {}^{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {p}\_{{\scriptscriptstyle {C}}\_d^{k'}}} = {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} \end{aligned}$$] (17) results from the geometric relationship between the two cameras across time steps k and [$$k'$$]. By comparing (17) to (13), the forward-camera’s Jacobian is obtained by first setting [$$s=f$$] in (13), and then multiplying it, from the right, with the extrinsic-calibration rotation matrix [$${}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R}$$]. Consequently, the expected information gain from the forward camera becomes: [] (18) Lastly, employing the geometric relationship [$${}^{{\scriptscriptstyle {C}}\_f^{k'}}\_{{\scriptscriptstyle {C}}\_f^{k}}\mathbf {R} {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} = {}^{{\scriptscriptstyle {C}}\_f}\_{{\scriptscriptstyle {C}}\_d}\mathbf {R} {}^{{\scriptscriptstyle {C}}\_d^{k'}}\_{{\scriptscriptstyle {C}}\_d^{k}}\mathbf {R}$$] in (18) results in the expression for [] shown in (7). References 1. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, New York (2004) 2. Davison, A.J.: Active search for real-time vision. In: Proceedings of the IEEE International Conference on Computer Vision, Beijing, China, pp. 66–73, 17–21 October 2005 3. Do, T., Carrillo-Arce, L.C., Roumeliotis, S.I.: Autonomous flights through image-defined paths. In: Proceedings of the International Symposium of Robotics Research, Sestri Levante, Italy, 12–15 September 2015 4. Engel, J., Sturm, J., Cremers, D.: Camera-based navigation of a low-cost quadrotor. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, pp. 2815–2821, 7–12 October 2012 5. Forster, C., Pizzoli, M., Scaramuzza, D.: SVO: fast semi-direct monocular visual odometry. In: Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, pp. 15–22, 31 May–5 June 2014 6. Grabe, V., Bülthoff, H.H., Scaramuzza, D., Giordano, P.R.: Nonlinear ego-motion estimation from optical flow for online control of a quadrotor UAV. Int. J. Robot. Res. 34(8), 1114–1135 (2015)CrossRef 7. Heng, L., Lee, G.H., Pollefeys, M.: Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle. Autonomous Robots 39(3), 259–277 (2015)CrossRef 8. Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, pp. 225–234, 13–16 November 2007 9. Kottas, D.G., DuToit, R.C., Ahmed, A., Guo, C.X., Georgiou, G., Li, R., Roumeliotis, S.I.: A resource-aware vision-aided inertial navigation system for wearable and portable computers. In: Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, pp. 6336–6343, 31 May– 5 June 2014 10. Loianno, G., Mulgaonkar, Y., Brunner, C., Ahuja, D., Ramanandan, A., Chari, M., Diaz, S., Kumar, V.: Smartphones power flying robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 1256–1263, 28 September– 2 October 2015 11. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, British Columbia, pp. 674–679, 24–28 August 1981 12. Pless, R.: Using many cameras as one. In: Proceeding of the IEEE International Conference on Computer Vision and Pattern Recognition, Madison, WI, pp. 11–18, 16–22 June 2003 13. Rosten, E., Drummond, T.: Machine learning for high-speed corner detection. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3951, pp. 430–443. Springer, Heidelberg (2006). doi:10.​1007/​11744023\_​34 CrossRef 14. Schauwecker, K., Zell, A.: On-board dual-stereo-vision for the navigation of an autonomous MAV. J. Intell. Robot. Syst. 74(1–2), 1–16 (2014)CrossRef 15. Shen, S., Mulgaonkar, Y., Michael, N., Kumar, V.: Vision-based state estimation and trajectory control towards high-speed flight with a quadrotor. In: Proceedings of the Robotics: Science and Systems, Berlin, Germany, 24–28 June 2013 16. Weiss, S., Achtelik, M.W., Lynen, S., Achtelik, M.C., Kneip, L., Chli, M., Siegwart, R.: Monocular vision for long-term micro aerial vehicle state estimation: a compendium. J. Field Robot. 30(5), 803–831 (2013)CrossRef 17. Wu, K.J., Ahmed, A., Georgiou, G., Roumeliotis, S.I.: A square root inverse filter for efficient vision-aided inertial navigation on mobile devices. In: Proceedings of Robotics: Science and Systems, Rome, Italy, 13–17 July 2015 18. Zhang, G., Vela, P.A.: Good features to track for visual SLAM. In: Proceedings of the IEEE InternationaL Conference on Computer Vision and Pattern Recognition, Boston, MA, pp. 1373–1382, 7–12 June 2015 Footnotes 1 Although the two cameras’ fov have a small overlap, we do not match features between them as the different camera characteristics make such process unreliable.   2 Note that although the ensuing presentation focuses on the specific (forward and downward) configuration of the cameras onboard the Bebop quadrotor used in our experiments, our approach is applicable to any dual-camera system with arbitrary geometric configuration.   3 Without loss of generality, we choose the quadrotor’s frame of reference to be the one of the downward camera.   4 Through experimentation, [9] has been shown to offer a very efficient and accurate metric for assessing the expected information gain from each feature.   5 MSCKF features are marginalized by the SR-ISWF for performing visual-inertial odometry without including their estimates in the filter’s state; see [17] for details.   6 We do not evaluate the RMSE for the case of only downward-pointing camera since the quadrotor’s CPU cannot perform image processing at the high frame rates (40 Hz) required for tracking features at high speeds (6 m/s).   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_48 Catching a Flying Ball with a Vision-Based Quadrotor Kunyue Su¹   and Shaojie Shen¹ (1) The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong     Kunyue Su Email: ksu@connect.ust.hk Abstract We present a method allowing a quadrotor equipped with only onboard cameras and an IMU to catch a flying ball. Our system runs without any external infrastructure and with reasonable computational complexity. Central to our approach is an online monocular vision-based ball trajectory estimator that recovers and predicts the 3-D motion of a flying ball using only noisy 2-D observations. Our method eliminates the need for direct range sensing via stereo correspondences, making it robust against noisy or erroneous measurements. Our system is made by a simple 2-D visual ball tracker, a UKF-based state estimator that fuses optical flow and inertial data, and a nonlinear tracking controller. We perform extensive analysis on system performance by studying both the system dynamics and ball trajectory estimation accuracy. Through online experiments, we show the first mid-air interception of a flying ball with an aerial robot using only onboard sensors. Keywords Ball catchingQuadrotorVision-based state estimation The authors would like to thank the scholarship and equipment support from DJI. 1 Introduction We address the challenging and interesting topic of using an autonomous quadrotor to perform mid-air interception of a flying ball using only onboard sensors. We consider the scenario where no external motion capture systems are available, and the quadrotor has to use only its onboard cameras and IMU to observe and estimate the 3-D trajectory of a flying ball. We also require that the quadrotor is localized using its onboard visual-inertial sensor suite and performs feedback control in reaction to the flying ball that is observed in real-time. This is a challenging, dexterous task in which all actions happen in fractions of a second. It requires instantaneous reaction to the flying ball based on only a few noisy visual observations. The quadrotor is also required to maintain high-precision self-localization under various flight conditions and execute very aggressive commands. Automated ball catching/juggling or other ball sports are popular research topics in robotics. Using ground platforms and with the help of vision-based ball tracking algorithms, successful systems have been developed using a fixed robotic arm [1] or a humanoid robot [2]. However, ground-based vision systems are typically heavy and often require careful calibration of stereo cameras for distance measurement. Such a high-quality multi-camera setup is usually not available on aerial robots that have tight size and weight constraints. Moreover, ground platforms enjoy static stability, which greatly simplifies both estimation and control. When using agile flying robots as the executing platform, most of the prior works rely on external motion capture systems for vehicle and ball state feedback. [3] presents a model predictive controller for fast adaptation to ball and quadrotor dynamics, while a complete system, with a focus on trajectory, is proposed for aerial ball juggling [4]. Similar results are also reported in [5, 6]. Towards vision-based approaches using flying robots, [7] made the first attempt at returning a ping-pong ball using an AR.Drone. However, its approach relies on prior knowledge of the ball size to compute the 3-D ball position. This is prone to large errors due to noise in 2-D tracking. [] Fig. 1. A diagram of the overall system architecture. Given the fact that the gravity is known and the ball undergoes free-fall motion, we propose a novel optimization-based method to estimate and predict the 3-D ball trajectory using only noisy 2-D visual observations from a single upward-facing camera (Sect. 2). Our method incorporates all visual observations to continuously refine the estimated ball trajectory that is represented by its initial position and velocity. Based on this estimation, we can predict the short-horizon ball motion and command the quadrotor to the estimated landing area using a standard feedback controller. We estimate the position, velocity, and orientation of the quadrotor using a UKF-based loosely coupled visual-inertial-fusion module [8], where visual measurements are obtained from an optical flow-based velocity estimator [9] that utilizes measurements from a downward-facing camera and a sonar. In the current system, 2-D ball tracking is done using a simple engineering method by outfitting the ball with LEDs, which leads to simple ball observation by intensity thresholding. The overall system diagram is shown in Fig. 1 and the experimental platform is exhibited in Fig. 2(a). We identify our contribution as threefold: - We propose a monocular vision-based ball trajectory estimator that recovers and predicts the 3-D motion of a flying ball using only noisy 2-D visual measurements. - We perform careful system engineering on state estimation and visual ball tracking to build a system that runs fast enough for mid-air ball interception. - We integrate everything into a complete system and show the world’s first autonomous ball catching with an aerial robot using only onboard visual sensors. The rest of the paper is organized as follows: Sect. 2 presents our optimization-based ball trajectory estimation method. Section 3 discusses other system modules, including 2-D visual ball tracking, UKF-based state estimation, feedback control, and other system implementation details. Section 4 presents experimental data to analyze platform dynamics and trajectory estimation accuracy, as well as overall system performance. Section 5 concludes the paper and discusses future research directions. 2 Monocular Vision-Based 3-D Ball Trajectory Estimation We utilize two physical assumptions to estimate the 3-D ball trajectory using only 2-D observations from a single camera. First, we assume the magnitude of the earth’s gravity is known. Second, we assume that the ball undergoes free-fall motion and air resistance is ignored. We start by defining notations. We consider (w) as the world frame and (b) as the ball frame. (c) is the camera frame that is assumed to be aligned with the quadrotor body frame. [$$\mathbf {p}\_{c}^w$$] and [$$\mathbf {R}\_{c}^w$$] represent the position and orientation of the camera in the world frame. [$$\mathbf {p}\_{b\_{t\_0}}^w$$] and [$$\mathbf {v}\_{b\_{t\_0}}^w$$] represent the initial 3-D position and velocity of the ball in the world frame. We set [$$t\_0$$] as the time we receive the first visual measurement of the ball. [$$\lambda \_{b\_t}$$] is the depth of the ball in the camera frame in the observation that occurred at time t, and [$$\mathbf {u}\_{b\_t}$$] is the corresponding normalized image coordinates of the ball. [$$\mathbf {g}^w=[0,0,g]$$] is the gravity acceleration in the world frame that is assumed to be known. 2.1 Linear Estimation Given the gravity vector [$$\mathbf {g}^w$$], and assuming free-fall motion, the ball trajectory can be represented by initial position [$$\mathbf {p}\_{b\_{t\_0}}^w$$] and velocity [$$\mathbf {v}\_{b\_{t\_0}}^w$$] of the ball. Note that once we obtained the ball’s initial conditions [$$\mathbf {p}\_{b\_{t\_0}}^w$$] and [$$\mathbf {v}\_{b\_{t\_0}}^w$$], we are able to predict the ball motion for an arbitrary time horizon, and thus compute the expected interception point of the quadrotor at a given height. At time t, using the 2-D ball observation [$$\mathbf {u}\_{b\_t}$$] from a calibrated monocular camera, as well as the camera/quadrotor pose [$$\mathbf {p}^w\_{c\_t}$$] and [$$\mathbf {R}^w\_{c\_t}$$] given by our sensor fusion module (Sect. 3.2), we can express the ball’s initial conditions [$$\mathbf {p}\_{b\_{t\_0}}^w$$] and [$$\mathbf {v}\_{b\_{t\_0}}^w$$] in terms of visual observations and quadrotor poses: [$$\begin{aligned} \mathbf {p}^w\_{b\_{t\_0}} + \mathbf {v}^w\_{b\_{t\_0}} \varDelta t + \frac{1}{2}\mathbf {g}^w \varDelta t^2 = \mathbf {p}^w\_{c\_t} + \lambda \_{b\_t} \mathbf {R}^w\_{c\_t} \mathbf {u}\_{b\_t}, \end{aligned}$$] (1) where [$$\varDelta t=t-t\_0$$] is the time difference since the first visual observation of the ball. We can then stack all visual observations occurring at [$$t\_0,\cdots ,t\_N$$] into a linear system using (1): [$$\begin{aligned} \begin{bmatrix} \mathbf {I}\_{3\times 3}&\mathbf {I}\_{3\times 3} \varDelta t\_0&-\mathbf {R}\_{c\_{t\_0}}^w \mathbf {u}\_{b\_{t\_0}}&\mathbf {0}\_{3\times 1}&\dots \\\mathbf {I}\_{3\times 3}&\mathbf {I}\_{3\times 3} \varDelta t\_1&\mathbf {0}\_{3\times 1}&-\mathbf {R}\_{c\_{t\_1}}^w \mathbf {u}\_{b{t\_1}}&\dots \\\vdots&\vdots&\vdots&\vdots&\ddots \\\end{bmatrix} \begin{bmatrix} \mathbf {p}\_{b\_{t\_0}}^w \\ \mathbf {v}\_{b\_{t\_0}}^w \\ \lambda \_{b\_{t\_0}} \\ \lambda \_{b\_{t\_1}} \\ \vdots \\ \lambda \_{b\_{t\_N}} \\ \end{bmatrix} = \begin{bmatrix} \mathbf {p}\_{c\_{t\_0}}^w - \frac{1}{2}\mathbf {g}^w \varDelta t\_0^2 \\ \mathbf {p}\_{c\_{t\_1}}^w - \frac{1}{2}\mathbf {g}^w \varDelta t\_1^2 \\ \vdots \\ \mathbf {p}\_{c\_{t\_N}}^w - \frac{1}{2}\mathbf {g}^w \varDelta t\_N^2 \end{bmatrix}, \end{aligned}$$] (2) where [$$\varDelta t\_i=t\_i-t\_0$$]. We can then solve for the initial position and velocity of the ball, as well as the depth of the ball in all visual observations, in a linear and non-iterative manner. 2.2 Nonlinear Optimization The linear ball trajectory estimation presented in Sect. 2.1 does not take the noise model of 2-D visual tracking into account, which may yield suboptimal estimation results. In this section, we propose to employ nonlinear optimization to refine the ball’s initial conditions [$$\mathbf {p}\_{b\_{t\_0}}^w$$] and [$$\mathbf {v}\_{b\_{t\_0}}^w$$] by minimizing the reprojection error of visual observations. We can improve computational efficiency by avoiding solving the depth of the ball. To this end, we write the nonlinear visual measurement residual [$$r\_t$$] at time t as [$$\begin{aligned} f\_t&= \begin{bmatrix} x\_{f\_t} \\ y\_{f\_t} \\ z\_{f\_t} \end{bmatrix} = {\mathbf {R}\_{c\_t}^w}^T \left( \mathbf {p}\_{b\_{t\_0}}^w + \mathbf {v}\_{b\_{t\_0}}^w \varDelta t + \frac{1}{2}\mathbf {g}^w \varDelta t^2 - \mathbf {p}\_{c\_t}^w \right) , \\ \nonumber r\_t&= \begin{bmatrix} \frac{x\_{f\_t}}{z\_{f\_t}} - u\_{x\_t} \\ \frac{y\_{f\_t}}{z\_{f\_t}} - u\_{y\_t} \end{bmatrix} \end{aligned}$$] (3) where [$$u\_{x\_t}$$] and [$$u\_{y\_t}$$] are the two components of [$$\mathbf {u}\_{b\_t}$$]. We can then write down the batch nonlinear least-square problem for all [$$N+1$$] visual observations as: [$$\begin{aligned} \min \_{\hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w} \sum \_{i = 0}^{N} \left\| r\_{t\_i} \right\| ^2\_{\mathbf {P}\_i}, \end{aligned}$$] (4) where [$$||{.}||\_{\mathbf {P}\_i}$$] is the Mahalanobis distance subject to the covariance matrix [$$\mathbf {P}\_i$$] for the [$$i^{th}$$] visual observation that occurred at time [$$t\_i$$]. We can then linearize (4) around the current estimates of [$$( \hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w )$$]: [$$\begin{aligned} \min \_{\hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w} \sum \_{i = 0}^{N} \left\| \hat{r}\_{t\_i} + \mathbf {J}\_{t\_i} \delta \mathbf {y} \right\| ^2\_{\mathbf {P}\_i}, \end{aligned}$$] (5) where [$$\hat{r}\_{t\_i}$$] is the residual for the [$$i^{th}$$] measurement computed using estimates [$$( \hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w )$$], [$$\delta \mathbf {y} = [ \delta \mathbf {p}\_{b\_{t\_0}}^w, \delta \mathbf {v}\_{b\_{t\_0}}^w ]^T$$] is the small error term for the ball parameters, and [$$\mathbf {J}\_{t\_i}$$] is the Jacobian computed with respect to [$$\delta \mathbf {y}$$] around the current estimates[$$( \hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w )$$], which can be written as: [$$\begin{aligned} \mathbf {J}\_{t\_i} = \begin{bmatrix} \frac{1}{z\_{f\_t}}&0&-\frac{x\_{f\_t}}{z\_{f\_t}^2} \\ 0&\frac{1}{z\_{f\_t}}&-\frac{y\_{f\_t}}{z\_{f\_t}^2} \end{bmatrix} \begin{bmatrix} {\mathbf {R}\_{c\_t}^w}^T&{\mathbf {R}\_{c\_t}^w}^T\varDelta t\\ \end{bmatrix} \end{aligned}$$] (6) (5) can be written into the form of a [$$6 \times 6$$] linear system [$$\mathbf {A} \delta \mathbf {y} = \mathbf {b}$$]. We solve it with the Huber norm [10] for better robustness against tracking errors and outliers. The linearization procedure runs in an iterative manner until convergence, after which we obtain good estimates of ball parameters [$$(\hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w)$$] for ball trajectory prediction and physical ball catching. [] Fig. 2. The experimental quadrotor shown in (a). We outfit the quadrotor with two cameras (one upward, one downward) for ball tracking and visual positioning, respectively. The container for ball catching is removed for easy visualization of electronic components. Onboard processing includes an Odroid XU4 computer. We further process the visual data using a ground station that is tethered to the quadrotor using Ethernet cables. A snapshot of the experiment shown in (b). In this experiment, the ball reaches approximately 3 m in height, and the quadrotor moves about 0.3 m to catch the ball. A video of our experiments are available at: http://​www.​ece.​ust.​hk/​~eeshaojie/​iser2016kunyue.​mp4 3 Implementation Details In this section, we describe both the hardware platform and the software architecture of the system. The software diagram is shown in Fig. 1. Many of the software modules were originally developed for autonomous aerial navigation in complex indoor and outdoor environments [8]. Here we briefly recap the key algorithms and present adaptations to our ball catching application. 3.1 Hardware Setup As shown in Fig. 2(a), our experimental quadrotor is based on the DJI F330 frame. The platform is equipped with a PixHawk AutoPilot for attitude stabilization and for streaming IMU data for sensor fusion. Two computers are used in our experiment. An Odroid XU4 installed onboard the quadrotor is responsible for data acquisition from multiple sensors. A ThinkPad T440s laptop is tethered to the quadrotor using Ethernet cables. The decision to tether to an additional ground PC is made to reduce the weight of the platform in order to achieve a high thrust-weight ratio for faster reaction to ball motions. In our system, the 2-D ball tracker, 3-D ball trajectory estimator, and optical-flow-based quadrotor velocity estimator run on the T440s laptop, while the UKF-based sensor fusion and position controller run onboard the quadrotor. Onboard sensors include an mvBlueFox MLC200wG camera with wide angle lens for ball tracking. Another MLC200wG camera is bundled with a sonar for optical flow-based state estimation. The total weight of the platform is [$$1.21\,\text {kg}$$], and the maximum thrust is [$$2.72\,\text {kg}$$]. This gives a 2.24 thrust-weight ratio to the quadrotor. Additionally, we outfit the ball with two bright LED lights to simplify the process of visual ball detection and tracking in the monocular video sequence. In this way, we can localize the ball in the image by simple intensity thresholding, which greatly reduces the latency in visual observations, and makes it possible to track a ball that flies at a speed of more than [$$6\,\text {m/s}$$]. However, this simple tracking method also introduces significant errors due to the size of the light blob in the image. Fortunately, our monocular-vision-based ball trajectory estimation is able to use a large number of visual observations to fit a good curve. Experimental results (Sect. 4) show that even with noisy ball observations, we can estimate the landing spot of the ball with the error less than [$$4\,\text {cm}$$]. 3.2 Quadrotor State Estimation Another critical component in our system is a fast and reliable estimator that tracks the position, velocity, and orientation of the quadrotor even under highly dynamical motions. Our state estimator consists of an optical flow-based quadrotor velocity estimator and a UKF-based loosely coupled visual-inertial-sensor fusion module [8]. The optical flow-based velocity estimator utilizes continuous homography, as introduced in Chap. 5 of [9] to estimate the frame-to-frame motion of the camera. The estimator starts with tracking of a fixed number of pixels that are evenly distributed within the image using KLT tracking [11]. Note that we do not extract features, but rather rely on tracking of pixels at fixed locations in each image. The features tracked are used for calculation of continuous homography with RANSAC for outlier rejection. The homography matrix can be decomposed into the up-to-scale linear velocity vector, the angular velocity vector, and the plane normal. The linear velocity is then scaled using sonar measurement on the assumption of the existence of a single floor plane. The velocity estimator runs at [$$20\,\text {Hz}$$] on the ground PC. The [$$20\,\text {Hz}$$] velocity estimate from the vision-only approach is too noisy and too slow for feedback control of the quadrotor. We employ a UKF sensor fusion framework with delayed measurement compensation to estimate the state of the quadrotor at [$$100\,\text {Hz}$$]. The UKF we used in this project is a direct adaption of the multi-sensor fusion framework proposed in our earlier work [8]. The system state is defined as: [$$\begin{aligned} \mathbf {x} = \begin{bmatrix} \mathbf {p}^w\_c,\mathbf {\dot{p}}^w\_c,\mathbf {q}^w\_c,\mathbf {a}\_{b} \end{bmatrix}{}^{\text {T}}, \end{aligned}$$] (7) where [$$\mathbf {q}^w\_c$$] is the quaternion representation of the quadrotor orientation, and [$$\mathbf {a}\_{b}$$] is the bias of the accelerometer measurement in the quadrotor body frame. We consider a classical IMU-based, as in [8], to drive the system state forward: [$$\begin{aligned} \mathbf {x}\_{t+1} = f(\mathbf {x}\_t,\,\mathbf {w}\_t,\,\mathbf {a}\_t,\,\mathbf {n}\_t), \end{aligned}$$] (8) where [$$\mathbf {w}\_t$$] and [$$\mathbf {a}\_t$$] are instantaneous angular velocities and linear accelerations measured by the IMU, and [$$\mathbf {n}\_t$$] is the additive Gaussian noise associated with the gyroscope, accelerometer, and accelerometer bias. The measurement model for the velocity estimator is linear with respect to the system state and can be written as: [$$\begin{aligned} \mathbf {z} = \mathbf {H} \mathbf {x} + \mathbf {n}, \end{aligned}$$] (9) where [$$\mathbf {H}$$] extracts the 3-D velocity in the state and [$$\mathbf {n}$$] is the additive Gaussian noise. The velocity measurement update can then be performed via a linear KF update. In our implementation, the velocity estimator runs on the ground PC, and the UKF-based sensor fusion module runs onboard the quadrotor. 3.3 Feedback Control After the first observation of the ball in the image plane, we use the estimated ball parameters [$$(\hat{\mathbf {p}}\_{b\_{t\_0}}^w, \hat{\mathbf {v}}\_{b\_{t\_0}}^w)$$] to perform a forward prediction to a point where the height of the ball is the same as the height of the quadrotor. We use the condition number in the linear system (2) to decide whether the estimation of the ball trajectory has converged. In practice, we often find the ball trajectory converges after it reaches the highest point (Sect. 4.2). After the ball trajectory has converged and the landing point of the ball is well-estimated, we use a standard two-level controller architecture, similar to that in [12], to drive the quadrotor to the estimated landing point for ball interception. The low-level controller that runs on the PixHawk AutoPilot stabilizes the attitude of the quadrotor while the position controller runs at [$$100\,\text {Hz}$$] on the Odroid XU4 onboard computer. We only control the 3-D position and velocity of the quadrotor. The yaw angle is kept constant throughout the experiment. One may argue that a trajectory-generation-based method [13] yield better performance than just using a feedback controller. However, after considering the dynamics of the quadrotor (Sect. 4.1) and the convergence property of the ball trajectory estimator (Sect. 4.2), we found that the margin for the quadrotor to successfully catch a ball is only [$$20\,\text {cm}$$]. Beyond that, the platform will not have sufficient control authority to reach the desired position. For this reason, we concluded that the benefit of using more sophisticated trajectory generation would be minimal, and we stuck with a simple feedback controller for our application. 4 Experiment Results 4.1 Analysis on Quadrotor Dynamics In this experiment, we perform an analysis on the dynamical capability of the quadrotor. This is essential as we need to make a prior judgment as tp whether the quadrotor will be able to catch the ball before it takes any intercepting action. Based on the thrust-weight ratio of our quadrotor (Sect. 3.1), we know that the platform may achieve a maximum of 63[$$^\circ $$] in attitude while still maintaining its height. This corresponds to a maximum horizontal acceleration of [$$19\,\text {m/s}^2$$]. We also need to take the latency in data processing, motor response, and the platform’s inertial moment into account when determining the maximum travel distance in a given period of time. To this end, we design two experiments to show the overall response of our quadrotor to step inputs. Figure 3 shows the response of the quadrotor to step velocity changes. Figure 4 shows the response to a step position command of 0.3 m. It takes approximately 0.8 s for the quadrotor to reach the desired location. As shown in Fig. 5(b) in Sect. 4.2, for a throw of about 2 m in maximum altitude, the quadrotor has about 0.8 s to react to the ball after the ball trajectory is estimated. Therefore, we conclude that the maximum displacement between the estimated ball landing point and the current quadrotor position is 0.3 m, beyond which the quadrotor will not try to catch the ball. [] Fig. 3. Response of the quadrotor to step velocity changes. Corresponding position profile is also plotted. [] Fig. 4. Response of the quadrotor to step position command. It takes about 0.8 s to reach a step command of 0.3 m 4.2 Ball Trajectory Estimation Performance In this section, we evaluate the performance of the 3-D ball trajectory estimator using two sets of experiment. An optical motion capture system (mocap) is used to provide ground-truth references. We throw the ball in an indoor environment with a maximum height of about 3 m. In the first experiment, we create virtual 2-D ball observations by back projecting the ground truth position of the ball provided by the mocap to a virtual camera. The virtual camera is attached to the ground truth location of the quadrotor, which is also provided by the mocap. This experiment aims to evaluate the performance of the ball trajectory estimator using noiseless visual observations. The virtual ball observations can be computed as: [$$\begin{aligned} \begin{aligned}&\mathbf {p}\_b^c = \begin{bmatrix} x\_{\mathbf {p}\_b^c} \\ y\_{\mathbf {p}\_b^c} \\ z\_{\mathbf {p}\_b^c} \end{bmatrix} = {\mathbf {R}\_c^w}^T (\mathbf {p}\_b^w - \mathbf {p}\_c^w) \\&\mathbf {u} = \frac{1}{z\_{\mathbf {p}\_b^c}} \begin{bmatrix} x\_{\mathbf {p}\_b^c} \\ y\_{\mathbf {p}\_b^c} \end{bmatrix} \end{aligned}, \end{aligned}$$] (10) Given a sequence of [$$\mathbf {u}$$], [$$\mathbf {R}\_c^w$$], and [$$\mathbf {p}\_c^w$$], we use the algorithm proposed in Sect. 2 to estimate the ball trajectory parameters, which include the initial position [$$\hat{\mathbf {p}}\_{b\_{t\_0}}^w$$] and initial velocity [$$\hat{\mathbf {v}}\_{b\_{t\_0}}^w$$] We keep the height of the quadrotor unchanged at [$$z\_c$$] and compute the X and Y positions of the ball when it drops to the quadrotor height using forward prediction. We first solve for the flying time of the ball [$$\varDelta t$$] by solving the second-order linear equation: [$$\begin{aligned} z\_{\mathbf {p}\_b} + {z\_{\mathbf {v}\_b}}\varDelta t + \frac{1}{2} g \varDelta t^2 = z\_{c} \end{aligned}$$] (11) and then compute the intercepting point as: [$$\begin{aligned} \begin{bmatrix} x\_{c} \\ y\_{c} \end{bmatrix} = \begin{bmatrix} x\_{\mathbf {p}\_b} \\ y\_{\mathbf {p}\_b} \end{bmatrix} + \begin{bmatrix} {x\_{\mathbf {v}\_b}} \\ {y\_{\mathbf {v}\_b}} \end{bmatrix} \varDelta t , \end{aligned}$$] (12) where [$$\hat{\mathbf {p}}\_{b\_{t\_0}} = [x\_{\mathbf {p}\_b},y\_{\mathbf {p}\_b},z\_{\mathbf {p}\_b}]^T$$] and [$$\hat{\mathbf {v}}\_{b\_{t\_0}}^w = [x\_{\mathbf {v}\_b},y\_{\mathbf {v}\_b},z\_{\mathbf {v}\_b}]^T$$]. Figure 5(a) shows the convergence of the estimated landing point of the ball, while Fig. 6(a) shows the corresponding 3-D plots of the estimated ball trajectories at different times. Earlier estimates that use fewer observations are shown in darker color. We see a quick convergence of the estimated ball trajectory as more observations are collected. A multi-trial experiment showing the repeatability of our method is shown in Fig. 7(a). [] Fig. 5. The convergence processes of the ball trajectory. The dashed line represents the actual ball location when it drops to the height of the quadrotor. With sufficient observation, the estimated landing point of the ball converges to the ground truth. In (a) ball’s virtual measurements and quadrotor localization are based on the mocap. In (b) the measurements are from vision and quadrotor localization is based on the UKF-based sensor fusion. In the second experiment, an identical procedure is followed, but this time using only onboard sensor measurements. We integrate the LED-assisted visual ball detection, the optical flow-based velocity estimator, and the UKF-based sensor fusion and see very similar convergence patterns to the mocap case, as shown in Figs. 5(b) and 7(b). However, about twice as much time is needed for the estimated ball trajectory to converge. The repeatability test is shown in Fig. 7(b), in which we see that the error standard deviation is about twice those of the mocap scenario, (0.0249 m, 0.0188 m) in the X and Y directions, respectively. However, given this standard deviation, we are still able to perform successful ball catching. [] Fig. 6. The convergence processes shown in 3-D. Darker color lines indicate earlier estimates of the ball trajectory. Lighter color indicates later estimates, which clearly show convergence. Solid lines are estimated ball trajectories using visual measurements, while dashed lines represent the predicted ball motion. Darker (earlier) curves have more dashed portions due to fewer visual measurements. For (a), all measurements are based on the mocap. For (b), all measurements are from vision. [] Fig. 7. (a) shows error distribution of the estimated landing point of the ball in multiple trials using virtual ball observations from the mocap. The standard deviation is (0.0118 m, 0.0065 m) in the X and Y directions, respectively. (b) shows error in the estimated landing point of the ball using all visual measurements. The error standard deviation is (0.0249 m, 0.0188 m) in the X and Y directions, respectively. This is about twice as the one obtained using virtual ball observations from the mocap, but still within the margins of the ball container. [] Fig. 8. Visualization of the ball catching process sorted in time. The green line is the estimated ball trajectory, and red dots are ball observations back-projected onto the 3-D estimated. In (b), only a few visual observations are made, the ball trajectory is inaccurate and unstable. The quadrotor starts the catching process in (c). As more observations are made, as shown in (e)–(f), the estimated ball trajectory converges and the quadrotor is able to intercept the ball. 4.3 Vision-Based Ball Catching We finally integrate everything together and have the quadrotor catch the LED-lighted ball using only visual measurements. The quadrotor first hovers in place. When the ball is observed by the upward-facing camera and the ball trajectory converged, as determined by the condition number of the linear system in (2), the quadrotor is commanded to fly towards the estimated landing point of the ball. A visualization is shown in Fig. 8. In this experiment, the ball is thrown to about 3 m in height and lands about 0.3 m away from the initial hover position of the quadrotor. The quadrotor takes about 0.8 s to reach the interception point. The actual interception is less than 0.02 m from the center of the quadrotor. Currently, the success rate of ball catching using all onboard sensing is only about 30 %, and significant engineering optimization is still needed to increase the success rate. However, our experiments do show that vision-based ball catching using an aerial robot is feasible. A snapshot of this experiment is shown in Fig. 2(b). 5 Conclusion and Future Work In this paper, we present a system, with the focus on 3-D ball trajectory estimation, for vision-based catching of a flying ball using a quadrotor. We present a novel ball trajectory estimation method that recovers the flying ball’s kinematic states from noisy onboard sensors. Our system is integrated with 2-D visual ball trackers, optical-flow-based vehicle velocity estimators, a UKF-based visual-inertial fusion method, and online feedback. Extensive experimental results are presented to verify the proposed approach. To the best of our knowledge, we are the first to demonstrate mid-air interception of a flying ball with aerial robots using only onboard sensors. Notably, our current system runs using two cameras with a very simple optical flow-based localization method. However, with state-of-the-art monocular visual-inertial state estimators [14], we could further reduce the system to a monocular one with even higher state estimation accuracy. Other areas of research include the use of trajectory generation methods [13] for better reaction to the time-varying ball trajectory estimation, while still satisfying the platform dynamical constraints. We could also use dynamic vision sensors [15] for high-speed ball trajectory and vehicle state estimation. References 1. Frese, U., Baeuml, B., Haidacher, S., Schreiber, G., Schaefer, I., Haehnle, M., Hirzinger, G.: Off-the-shelf vision for a robotic ball catcher. In: Proceedings of the IEEE/RSJ International Conference on Intelligence Robots and Systems, vol. 3, pp. 1623–1629. IEEE (2001) 2. Birbach, O., Frese, U., Bäuml, B.: Realtime perception for catching a flying ball with a mobile humanoid. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 5955–5962. IEEE (2011) 3. Bouffard, P., Aswani, A., Tomlin, C.: Learning-based model predictive control on a quadrotor: onboard implementation and experimental results. In: Proceedings of the IEEE International Conference on Robotics and Automation (2012) 4. Müller, M., Lupashin, S., D’Andrea, R.: Quadrocopter ball juggling. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5113–5120. IEEE (2011) 5. Dong, W., Gu, G.Y., Ding, Y., Zhu, X., Ding, H.: Ball juggling with an under-actuated flying robot. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 68–73. IEEE (2015) 6. Ritz, R., Muller, M.W., Hehn, M., D’Andrea, R.: Cooperative quadrocopter ball throwing and catching. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4972–4978. IEEE (2012) 7. Silva, R., Melo, F.S., Veloso, M.: Towards table tennis with a quadrotor autonomous learning robot and onboard vision. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 649–655. IEEE (2015) 8. Shen, S., Mulgaonkar, Y., Michael, N., Kumar, V.: Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 4974–4981. IEEE (2014) 9. Ma, Y., Soatto, S., Kosecka, J., Sastry, S.S.: An Invitation to 3-D Vision: From Images to Geometric Models, vol. 26. Springer Science & Business Media, New York (2012)MATH 10. Huber, P.: Robust estimation of a location parameter. Ann. Math. Stat. 35(2), 73–101 (1964)MathSciNetCrossRefMATH 11. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the International Joint Conference on Artificial Intelligence, Vancouver, Canada, pp. 24–28, August 1981 12. Lee, T., Leoky, M., McClamroch, N.: Geometric tracking control of a quadrotor UAV on SE(3). In: Proceedings of the International Conference on Decision and Control, Atlanta, GA, pp. 5420–5425, December 2010 13. Mellinger, D., Kumar, V.: Minimum snap trajectory generation and control for quadrotors. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2520–2525. IEEE (2011) 14. Shen, S., Michael, N., Kumar, V.: Tightly-coupled monocular visual-inertial fusion for autonomous flight of rotorcraft MAVs. In: Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, May 2014 15. Mueggler, E., Baumli, N., Fontana, F., Scaramuzza, D.: Towards evasive maneuvers with quadrotors using dynamic vision sensors. In: European Conference on Mobile Robots, pp. 1–8. IEEE (2015) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_49 Experience-Based Models of Surface Proximal Aerial Robot Flight Performance in Wind John W. Yao¹  , Vishnu R. Desaraju¹   and Nathan Michael¹   (1) Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA     John W. Yao (Corresponding author) Email: johnyao@cmu.edu   Vishnu R. Desaraju Email: rajeswar@cmu.edu   Nathan Michael Email: nmichael@cmu.edu Abstract This work presents an experiment-driven aerodynamic disturbance modeling technique that leverages experiences from past flights to construct a predictive model of the exogenous forces acting on an aerial robot. Specifically, we consider operation in turbulent air stemming from the interaction between wind and nearby surfaces. To construct the disturbance model, we employ Locally Weighted Projection Regression and relate the aerial robot’s state and an experimentally learned freestream wind model to the disturbance forces estimated during flight through the environment. The approach is experimentally validated through a set of flight tests in an indoor environment with artificially generated turbulent airflow that illustrate the computation of this disturbance model, its generalizability across flow conditions, and its utility for disturbance-aware motion planning. 1 Introduction and Related Work In this work, we construct models of aerial robot flight performance near surface obstacles under the influence of wind. Smooth and accurate flight is difficult to achieve in these conditions due to the complex, nonlinear effects of aerodynamic forces and surface-induced turbulence on vehicle motion. Consequently, applications such as aerial robot infrastructure inspection that require precise positioning of sensors to collect data are extremely sensitive to local aerodynamic forces. To mitigate performance degradation due to harmful aerodynamic effects and take advantage of beneficial ones, planning strategies rely on access to high fidelity models of aerodynamic disturbances throughout the flight environment. Therefore, building accurate models of flight performance with respect to varying wind and local surface geometry is an important capability to improve the quality of aerial robot infrastructure inspection. Several approaches address the problem of planning and control with aerial robot plant models that vary with time and environmental conditions. One class of methods seeks to develop precise empirical models of aerodynamic phenomena [1–4] that can be accurately inverted to compute commands that achieve desired motions. Unfortunately, the weak observability [5] of many such models with available sensors in non-laboratory flight conditions renders them incapable of capturing changes in vehicle dynamics. Adaptive control strategies reactively update the controller’s internal plant model to match the vehicle’s actual motion and are widely used in aerial robots to mitigate unmodeled dynamics [6]. However, these methods are limited by the observation delay of inertial sensors and pose estimation [7]. Although sensors such as strain gages [8], accelerometer arrays [9], and pitot tubes [10] can reduce this delay and facilitate faster cancellation of disturbances, adaptive control strategies cannot pre-emptively mitigate them. Anticipating aerodynamic disturbances in planning and control requires generative models of the impact of environmental phenomena on vehicle motion [11], such as wind velocity on gliders [12] and ocean currents on underwater robots [13]. In many cases, the disturbance field is strongly correlated with the geometry of the vehicle’s immediate surroundings [14]. This fact is exploited in works that infer aerodynamic disturbance forces from local geometry [15]. In this work, we propose and experimentally validate an approach for modeling aerodynamic disturbance forces on an aerial robot flying near surface obstacles in wind. The proposed method leverages an aerial robot’s accumulated flight experiences to predict disturbance forces that it will experience at a given position, velocity, and for a given set of far field wind conditions. We summarize the technical approach in Sect. 2. Section 3 presents the results of experiments that validate the functionality and accuracy of the proposed horizontal aerodynamic disturbance force model under a variety of trajectories and artificially generated wind conditions in an indoor laboratory setting. Additionally, we demonstrate the utility of this model for aerial robot trajectory planning near surface obstacles in strong winds. Section 4 closes with a discussion and summary of the implications of the experimental results. 2 Technical Approach 2.1 Aerodynamic Disturbance Estimation In this work, we consider the translational dynamics of a quadrotor aerial robot with respect to a world reference frame, [$$\{W\}$$], and a body reference frame, [$$\{B\}$$], that is coincident with the center of mass such that the positive z-axis points in the nominal thrust direction parallel to the rotor axes. [$$\begin{aligned} \dot{\mathbf {p}}&= \mathbf {v} \end{aligned}$$] (1) [$$\begin{aligned} \dot{\mathbf {v}}&= \frac{1}{m} \left( T \mathbf {R} \mathbf {e}\_3 + \mathbf {f}\_t + \mathbf {f}\_a \right) - g \mathbf {e}\_3 \end{aligned}$$] (2) The vectors [$$\mathbf {p}, \mathbf {v} \in \{W\}$$] are the position and velocity of the vehicle center of mass, [$$\mathbf {R}$$] is a rotation matrix that takes vectors from [$$\{B\}$$] to [$$\{W\}$$], [$$\mathbf {e}\_3$$] is the third column of the [$$3 \times 3$$] identity matrix, and [$$\mathbf {f}\_t, \mathbf {f}\_a \in \{W\}$$] are the trims and aerodynamic disturbance forces acting on the vehicle. The constants m and g denote mass and gravitational acceleration, respectively, while the variable T represents the total thrust from all rotors along the body z-axis. We defer details of the lower-level attitude and motor dynamics to the relevant source literature [2]. We estimate the acceleration associated with the aerodynamic disturbance force, [$$\mathbf {a} = \frac{\mathbf {f}\_a}{m}$$], using a nonlinear observer that accounts for motor dynamics: [$$\begin{aligned} \dot{\hat{\mathbf {v}}}&= \frac{c\_T}{m} \hat{{\varvec{\varpi }}}^\text {T} \hat{{\varvec{\varpi }}} \mathbf {R} \mathbf {e}\_3 -g \mathbf {e}\_3 + \hat{\mathbf {a}} + \frac{\mathbf {f}\_t}{m} + \mathbf {L} \left( \mathbf {v} - \hat{\mathbf {v}} \right) \end{aligned}$$] (3) [$$\begin{aligned} \dot{\hat{{\varvec{\varpi }}}}&= k\_m \left( {\varvec{\varpi }}^d - \hat{{\varvec{\varpi }}} \right) \end{aligned}$$] (4) [$$\begin{aligned} \dot{\hat{\mathbf {a}}}&= \mathbf {\Gamma } \left( \mathbf {v} - \hat{\mathbf {v}} \right) \end{aligned}$$] (5) The vectors [$$\hat{\mathbf {v}}, \hat{{\varvec{\varpi }}}$$], and [$$\hat{\mathbf {a}}$$] are estimates of velocity, rotor speeds, and aerodynamic disturbance acceleration. Observations of velocity and attitude are provided by a separate state estimator, while the desired rotor speeds, [$${\varvec{\varpi }}^d$$], are provided by the controller. Trims, mass, thrust coefficient ([$$c\_T$$]), and motor constant ([$$k\_m$$]) are measured offline in separate experiments. [$$\mathbf {L}$$] and [$$\mathbf {\Gamma }$$] are diagonal matrices of observer gains. The aerodynamic disturbance acceleration observation, [$$\mathbf {a}$$], is obtained by low pass filtering [$$\hat{\mathbf {a}}$$] to neglect high frequency components that are beyond the bandwidth of the vehicle’s actuation. 2.2 Freestream Wind Descriptor We define the freestream wind descriptor, [$$\mathbf {a}\_\infty $$], as the aerodynamic disturbance that would be experienced by a vehicle at a location in a wind field in the absence of all surface obstacles. As this measure can account for the majority of bulk airflow effects, other features such as position and velocity can be used to model local aerodynamic disturbance variations from surface-induced effects. Additionally, this descriptor allows for comparison and interpolation between different bulk airflow conditions (see Sect. 3.2). To evaluate the freestream wind descriptor, the aerial robot traverses a closed path trajectory offset from the obstacles at a distance such that the experienced aerodynamic disturbances along the path are primarily due to the bulk airflow and not surface-induced effects. The disturbance accelerations observed along the path are introduced into a Gaussian process regression [16] to create a mapping from position to disturbance acceleration [$$\begin{aligned} \mathbf {a}\_\infty (\mathbf {p})&= \mathcal {GP}(\mathbf {p}) \end{aligned}$$] (6) To account for the dominant direction of wind flow, the Gaussian process distance measure is weighted according to the mean disturbance acceleration in each direction. Figure 1a illustrates the y-component of the freestream wind descriptor model computed from a circular trajectory in a unidirectional wind field, leading to comparable results computed over a trajectory that passes through interior locations (Fig. 1b). The similarity of these results indicates that the information obtained from the perimeter of the environment provides insight into the true aerodynamic disturbances in the interior, validating the proposed method of computing the freestream wind descriptor. [] Fig. 1. The freestream wind descriptor computed via (a) perimeter data and (b) interior data exhibit similar predictions for the y-component of disturbance acceleration over the x and y-components of position. 2.3 Aerodynamic Disturbance Prediction Locally Weighted Projection Regression (LWPR) [17] is employed to generate an aerodynamic disturbance prediction model. LWPR is a data-efficient regression technique that computes a low-dimensional model of the training data based on a combination of local linear models. Data efficiency is particularly important in an experience-based approach to enable continual learning and reuse of learned models. The disturbance model inputs are the aerial robot’s position, velocity, and the freestream wind descriptor. Separate LWPR models (7) are learned to predict each component of the disturbance acceleration observed via (3)–(5) [$$\begin{aligned} \mu \_{a,i}&= l\_i(\mathbf {p},\mathbf {v},\mathbf {a}\_\infty (\mathbf {p})), \quad i = \{x,y,z\} \end{aligned}$$] (7) In addition to the mean prediction, [$$\mu \_{a,i}$$], LWPR also provides a standard deviation uncertainty bound, [$$\sigma \_{a,i}$$]. For notational convenience, we stack the componentwise means and uncertainties into the vectors [$${\varvec{\mu }}\_\mathbf {a}$$] and [$${\varvec{\sigma }}\_\mathbf {a}$$]. While the formulation presented in this section is applicable to three dimensions, the experiments detailed in the sequel use two dimensional positions, velocities, and disturbance accelerations for ease of visualization. 3 Experiments In this section, we detail two experimental studies that seek to validate the aerodynamic prediction model developed above and illustrate its utility for disturbance-aware motion planning. 3.1 Experimental Setup and Data Collection To achieve repeatable wind conditions for model validation, we conducted experiments in an indoor [$$5\times 5\times 4$$] [$$\mathrm{m^3}$$] net-enclosed flight arena surrounded by movable high-power fans that generate turbulent airflow of up to 6.0 m/s. To disrupt the wind field and generate realistic aerodynamic interactions with surfaces, we introduce a transparent cube in the center of the flight environment. Flight tests are performed on a Lumenier Danaus quadrotor equipped with a Pixhawk autopilot and Odroid U3 computer. The vehicle fits within a [$$0.29\times 0.32\times 0.14$$] [$$\mathrm{m^3}$$] volume and has a hover mass of 0.734 kg. An unscented Kalman filter [18] fuses IMU data with motion capture pose observations to provide a smooth state estimate for control. A cascaded control scheme [19] consisting of a 100 Hz outer position controller and a 200 Hz inner attitude controller, both augmented with L1 adaptive control [20] for disturbance rejection, is implemented in C++ using ROS and run onboard the vehicle. Figure 2 illustrates the experimental platform and environment. To obtain a diverse set of wind conditions for building the aerodynamic disturbance model, different airflow patterns are synthesized by aligning a row of high-powered fans to point east, south, and southeast while varying their strength. Freestream wind models are computed for each wind condition using data from circular flights along the perimeter of the flight arena. Next, the aerial robot is commanded to fly trajectories around the cube obstacle at a constant height of 60 cm above the ground. During postprocessing, the position, velocity, disturbance acceleration, and freestream wind descriptor (computed from position using (6)) from the vehicle’s flight logs are used to train the aerodynamic disturbance model. [] Fig. 2. The quadrotor aerial robot navigates around a transparent cube obstacle in a wind field generated by high-power fans (orange circles), each of which is 70 cm in diameter and capable of producing turbulent airflow conditions up to 6.0 m/s. [] Fig. 3. The actual (blue), predicted (red), and 1 [$$\sigma $$] uncertainty envelope (gray) for the x-component of the aerodynamic disturbance acceleration over the test trajectory under northwesterly wind are depicted for the reduced (top) and full (bottom) models. 3.2 Model Prediction Performance The utility of the aerodynamic disturbance model is contingent on its ability to generate reasonable predictions in wind conditions that differ from those encountered in previous flights, as is often the case when a vehicle returns to inspect a structure after a long time interval. We assume that the bulk airflow is slowly varying or constant over the spatiotemporal domain of a typical inspection flight mission. Therefore, the freestream wind descriptor provides a way to express the “distance” between two bulk flow conditions. This component of the proposed aerodynamic disturbance model enables generalization across different wind conditions. To demonstrate this fact, we conducted twelve flight tests around the cube obstacle under westerly and northerly wind of varying strengths and one flight under northwesterly wind. Two LWPR aerodynamic disturbance models are trained on data from the first group of flights. The reduced model’s input feature vector consists of only position and velocity, while the full model’s input feature vector also includes the freestream wind descriptor. Both models are tested on flight data from the single flight under northwesterly wind. A comparison of both models’ predictions for [$$a\_x$$] against the ground truth (Fig. 3) shows that the full model is more accurate than the reduced model. Figure 4 supports this conclusion and shows that the full model’s prediction error norm (magnitude of [$$a\_x$$] and [$$a\_y$$] prediction errors) over the test data is lower than that of the reduced model. The oracle model uses the same LWPR hyperparameters as the full model, but is trained on the test data, and is included in Fig. 4 for comparison as the benchmark for the best achievable prediction accuracy under the same LWPR hyperparameters. [] Fig. 4. The cumulative distributions of disturbance acceleration prediction error norms for the full, reduced, and oracle models over a test flight under northwesterly wind. The superior performance of the full model with respect to the reduced model can be attributed to the fact that the latter only encodes position and velocity data to account for variations in bulk flow, resulting in degradation of prediction accuracy as training data is drawn from an increasing variety of wind conditions. The results in Figs. 3 and 4 highlight the improved prediction of disturbance accelerations in a previously unseen northwesterly wind using only flight data in westerly and northerly winds and demonstrates the benefit of using the freestream wind descriptor as a means of generalizing across different bulk flow conditions. 3.3 Disturbance-Aware Motion Planning To demonstrate the utility of the proposed aerodynamic disturbance prediction model, we consider a scenario derived from persistent inspection applications in which a quadrotor must navigate around a structure in the presence of an initially unknown wind field. However, as the inspection task requires multiple traversals of the region, we can leverage the disturbance model learned from past experience to improve the traversal route. An A\* search with an Euclidean distance heuristic computes optimal paths between user-selected start and goal positions. The resulting path is converted into a smooth trajectory for the quadrotor to follow by fitting a minimum-acceleration polynomial spline [21]. Although we restrict plans to the 2D horizontal plane for simplicity, the proposed approach generalizes to 3D. The plane is discretized with each grid cell connected to each of its eight neighbors, while each edge can be traversed at three different speeds. The cost, J, at each neighbor is defined as [$$\begin{aligned} J(\mathbf {p},\mathbf {v},\mathbf {a}\_\infty )&= \varDelta \mathbf {p} + h(\mathbf {p},\mathbf {v},\mathbf {a}\_\infty ) \end{aligned}$$] (8) [$$\begin{aligned} h(\mathbf {p},\mathbf {v},\mathbf {a}\_\infty )&= \Vert {\varvec{\mu }}\_\mathbf {a}\Vert + \Vert {\varvec{\sigma }}\_\mathbf {a}\Vert \end{aligned}$$] (9) where [$$\varDelta \mathbf {p}$$] is the distance between the neighboring cell and the current one. We define field cost (h) as the sum of the vector norms of the predicted acceleration mean and uncertainty, penalizing strong disturbances as well as volatile regions. However, as LWPR reports high uncertainty in regions with sparse flight data coverage, we only apply the field cost when the prediction model uncertainty is within empirically determined bounds on model variability. At the beginning of each planning trial, the quadrotor computes a path to a location on the other side of a cube obstacle with a strong wind field approximately orthogonal to the desired direction of travel, as depicted in Fig. 5. A series of five successive trials demonstrates the evolution of the computed trajectory as the accumulation of flight experience improves the accuracy of the field cost. [] Fig. 5. Snapshots of the quadrotor executing the disturbance-aware path computed in the final planning trial. Fans (orange circles) generate a northwesterly bulk flow. [] Fig. 6. The 2D paths (black) computed by the A\* planner on successive trials are shown relative to the central obstacle (gray) for the wind conditions depicted in Fig. 5. Start (green) and goal (red) locations correspond to the aerial robot’s location in Fig. 5a and d, respectively. The vector arrows (blue) indicate the speed and direction of the motion that incurs the lowest field cost (9) at each grid cell in the environment. In the first planning trial, the vehicle starts with a prior that encodes a maximally uncertain aerodynamic disturbance model and plans a trajectory that ignores the field cost. After each trial, the model is updated according to the recorded observations and the task is repeated using the growing set of accumulated experiences. Figure 6 shows the computed trajectories for all trials and the updated field costs for all trials excluding the first trial. In the second, third, and fourth trials (Figs. 6a–c), the trajectory alternates between the windward and leeward sides of the obstacle to reduce uncertainty in the aerodynamic model. In the final trial (Fig. 6d), the planner converges to a trajectory on the leeward side of the obstacle, shielded from direct wind from the fans while avoiding the turbulent regions behind the obstacle (Fig. 6c). Table 1 reports an overall trend of decreasing aerodynamic disturbance acceleration magnitudes and variability as more accurate field costs are used for planning. As a consequence of the planner explicitly optimizing for low disturbance trajectories, both mean and maximum tracking error are reduced over the sequence of runs. By utilizing the proposed experience-based aerodynamic disturbance model, the quadrotor is able to compute informed paths through the environment that minimize adverse aerodynamic effects of the wind field on flight performance. Table 1. Horizontal disturbance acceleration norm and 3D tracking error norm over successive A\* planner trajectories. +-------+-------------------------------------------+--------------------+----------------------------+-------+ | Trial | Horizontal disturbance norm [m/s[$$^2$$]] | | 3D Tracking error norm [m] | | +:======+:==========================================+:===================+:===========================+:======+ | | Mean | Standard deviation | Mean | Max | +-------+-------------------------------------------+--------------------+----------------------------+-------+ | 1 | 0.305 | 0.342 | 0.065 | 0.460 | +-------+-------------------------------------------+--------------------+----------------------------+-------+ | 2 | 0.276 | 0.276 | 0.069 | 0.427 | +-------+-------------------------------------------+--------------------+----------------------------+-------+ | 3 | 0.286 | 0.336 | 0.065 | 0.418 | +-------+-------------------------------------------+--------------------+----------------------------+-------+ | 4 | 0.221 | 0.233 | 0.061 | 0.508 | +-------+-------------------------------------------+--------------------+----------------------------+-------+ | 5 | 0.209 | 0.218 | 0.051 | 0.352 | +-------+-------------------------------------------+--------------------+----------------------------+-------+ 4 Conclusion In this work, we investigate the problem of modeling aerodynamic disturbances on aerial robots operating in windy environments near obstacles. We present a modeling technique that leverages past flight experiences to learn a mapping from position, velocity, and freestream wind conditions to disturbance acceleration. The experimental validation of these techniques suggest that the proposed methodology yields superior flight performance for surface-proximal operations with variable wind flow conditions. We demonstrate the viability of this approach in a laboratory setting with repeatable turbulent wind conditions and show that the approach leads to improvements in trajectory planning with the accumulation of flight experience over multiple missions, enabling informed motion planning to mitigate the effects of aerodynamic disturbances. References 1. Hoffmann, G.M., Huang, H., Waslander, S.L., Tomlin, C.J.: Quadrotor helicopter flight dynamics and control: Theory and experiment. In: Proceedings of the AIAA Guidance, Navigation, and Control Conferrnce, vol. 2, Hilton Head, USA, August 2007 2. Bangura, M., Mahony, R.: Nonlinear dynamic modeling for high performance control of a quadrotor. In: Proceedings of the Australian Conference on Robotics and Automation, Wellington, New Zealand, pp. 1–10, December 2012 3. Omari, S., Hua, M.D., Ducard, G., Hamel, T.: Nonlinear control of VTOL UAVs incorporating flapping dynamics. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, pp. 2419–2425, November 2013 4. Leishman, R., Macdonald, J., Beard, R., McLain, T.: Quadrotors and accelerometers: state estimation with an improved dynamic model. IEEE Control Syst. Mag. 34(1), 28–41 (2014)MathSciNetCrossRef 5. Abeywardena, D., Wang, Z., Dissanayake, G., Waslander, S., Kodagoda, S.: Model-aided state estimation for quadrotor micro air vehicles amidst wind disturbances. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, USA, pp. 4813–4818, September 2014 6. Grande, R.C., Chowdhary, G., How, J.P.: Experimental validation of bayesian nonparametric adaptive control using gaussian processes. J. Aero. Inf. Syst. 11(9), 565–578 (2014) 7. Mohamed, A., Abdulrahim, M., Watkins, S., Clothier, R.: Development and flight testing of a turbulence mitigation system for micro air vehicles. J. Field Robot. 33(5), 639–660 (2016)CrossRef 8. Ranganathan, B.N., Penskiy, I., Dean, W., Bergbreiter, S., Humbert, J.S.: Bio-inspired wind frame state sensing and estimation for mav applications. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany, pp. 2729–2735, September 2015 9. Gremillion, G.M., Castano, L.M., Humbert, J.S.: Disturbance rejection with distributed acceleration and strain sensing. In: Proceedings of the AIAA Guidance, Navigation, and Control Conference, vol. 2, Kissimmee, USA, pp. 1623–1639, January 2015 10. Yeo, D.W., Sydney, N., Paley, D.A.: Onboard flow sensing for rotary-wing uav pitch control in wind. In: Proceedings of the AIAA Guidance, Navigation, and Control Conference, vol. 4, National Harbor, USA, pp. 2445–2455, January 2016 11. Desaraju, V., Michael, N.: Hierarchical adaptive planning in environments with uncertain, spatially-varying disturbance forces. In: Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, pp. 5171–5176, May 2014 12. Lawrance, N., Sukkarieh, S.: Path planning for autonomous soaring flight in dynamic wind fields. In: Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China, pp. 2499–2505, May 2011 13. Hollinger, G.A., Pereira, A.A., Binney, J., Somers, T., Sukhatme, G.S.: Learning uncertainty in ocean current predictions for safe and reliable navigation of underwater vehicles. J. Field Robot. 33(1), 47–66 (2016)CrossRef 14. Galway, D., Etele, J., Fusina, G.: Modeling of urban wind field effects on unmanned rotorcraft flight. J. Aircraft 48(5), 1613–1620 (2011)CrossRef 15. Bartholomew, J., Calway, A., Mayol-Cuevas, W.: Learning to predict obstacle aerodynamics from depth images for micro air vehicles. In: Proceedings of the IEEE International Conference on Robotics and Automation, Hong Kong, China, pp. 4967–4973, May 2014 16. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)MATH 17. Vijayakumar, S., D’Souza, A., Schaal, S.: Incremental online learning in high dimensions. Neural Comput. 17(12), 2602–2634 (2005)MathSciNetCrossRef 18. Julier, S.J., Uhlmann, J.K.: A new extension of the Kalman filter to nonlinear systems. In: Proceedings of the SPIE, vol. 3068, Orlando, USA, pp. 182–193, July 1997 19. Mahony, R., Kumar, V., Corke, P.: Multirotor aerial vehicles: modeling, estimation, and control of quadrotor. IEEE Robot. Autom. Mag. 19(3), 20–32 (2012)CrossRef 20. Hovakimyan, N., Cao, C.: L1 Adaptive Control Theory: Guaranteed Robustness with Fast Adaptation, vol. 21. SIAM, Philadelphia (2010)CrossRefMATH 21. Richter, C., Bry, A., Roy, N.: Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments. In: Proceedings of the International Symposium of Robotics Research, Singapore, December 2013 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_50 “On-the-Spot Training” for Terrain Classification in Autonomous Air-Ground Collaborative Teams Jeffrey Delmerico¹  , Alessandro Giusti²  , Elias Mueggler¹  , Luca Maria Gambardella²   and Davide Scaramuzza¹   (1) Robotics and Perception Group, University of Zurich, Zurich, Switzerland (2) Dalle Molle Institute for Artificial Intelligence (IDSIA), USI-SUPSI, Manno, Switzerland     Jeffrey Delmerico (Corresponding author) Email: jeffdelmerico@ifi.uzh.ch   Alessandro Giusti Email: alessanndrog@idsia.ch   Elias Mueggler Email: mueggler@ifi.uzh.ch   Luca Maria Gambardella Email: luca@idsia.ch   Davide Scaramuzza Email: sdavide@ifi.uzh.ch Abstract We consider the problem of performing rapid training of a terrain classifier in the context of a collaborative robotic search and rescue system. Our system uses a vision-based flying robot to guide a ground robot through unknown terrain to a goal location by building a map of terrain class and elevation. However, due to the unknown environments present in search and rescue scenarios, our system requires a terrain classifier that can be trained and deployed quickly, based on data collected on the spot. We investigate the relationship of training set size and complexity on training time and accuracy, for both feature-based and convolutional neural network classifiers in this scenario. Our goal is to minimize the deployment time of the classifier in our terrain mapping system within acceptable classification accuracy tolerances. So we are not concerned with training a classifier that generalizes well, only one that works well for this particular environment. We demonstrate that we can launch our aerial robot, gather data, train a classifier, and begin building a terrain map after only 60 s of flight. Multimedia Material: This paper is accompanied by a video illustrating the approach, available at: http://​rpg.​ifi.​uzh.​ch. Keywords Terrain classificationAir-ground collaborationSearch-and-rescueDeep learningConvolutional neural networks This research was funded by the Swiss National Science Foundation through the National Center of Competence in Research Robotics (NCCR). 1 Introduction In search-and-rescue scenarios, time is a critical factor in the success of the first responders [9], who must often put themselves in dangerous situations in order to provide aid. Unmanned systems have the possibility of providing new capabilities for them, as well as increasing their safety and decreasing the response time in delivering that aid. However, one challenge is that disaster scenarios (e.g. earthquakes) may alter the environment so that any prior maps are no longer valid, and even the types of terrain that are present may have changed. Consequently, robotic systems must be capable of gathering and using data on demand, without a reliance on a priori maps or pre-trained classifiers. [] Fig. 1. Overview of the search and rescue scenario: a terrain map generated from one of our field experiments (left) and the workflow of the collaborative team (right). The map includes the elevation information, terrain classification, and the path found for the ground robot. We investigate on-the-spot training of the classifier that provides the terrain classification in our map. In order to provide a fast unmanned response, we have developed a system where a flying robot guides a ground robot by mapping the terrain and finding a traversable path for it to follow. We assume no prior knowledge about the environment, so our system explores an unknown map and operates in the following stages, illustrated in Fig. 1. First, a human operator flies the micro aerial vehicle (MAV), searching for a victim. Camera imagery from the MAV’s down-looking camera is used to generate an initial classification of the terrain. Once the goal is found, the MAV engages in autonomous vision-guided flight while 3D reconstruction and ground robot planning are computed. The MAV incrementally maps the elevation and replans the ground robot trajectory until a feasible path the goal is completely explored. During the victim search stage of operation, we gather data to train a classifier based on the terrain classes that are present in the environment. Our goal is to minimize the overall response time: the combination of classifier training time, aerial mapping and exploration, and ground robot traversal. Therefore, the faster this classifier can be trained, the shorter our ground robot’s response time will be in delivering aid to the goal. This rapid training will of course come at the expense of quantitative performance, but in our search and rescue scenario, state of the art accuracy is not as important as response time: our terrain classifier does not need to generalize to many different conditions, since it’s deployed immediately and locally. 1.1 Related Work Other works have considered terrain classification from aerial imagery, but this is the first paper to address “on-the-spot” classifier training. An unmanned helicopter is used in [13] to gather multimodal aerial data, exhaustively exploring the area, to create an a priori terrain classification map that is then used to compute a ground path. High altitude, high resolution aerial images have also been used to perform terrain classification [1, 8]. These approaches utilize pre-trained classifiers that model a fixed set of classes that are known a priori. Performance of machine learning algorithms is strongly related to the quality and amount of available training data. This is especially true with image classification problems, which operate on high-dimensional inputs and are well-known to be difficult to handle in machine learning. Unfortunately, in our problem scenario we must handle severe limitations in the amount of available training data, which has to be acquired on the spot, and at the same time ensure very fast training time. The former problem has been studied more extensively than the latter, especially in the context of CNNs. It is well known that CNNs are powerful image classifiers as long as enough data is provided [7]. Limited training set sizes are especially tricky for CNNs because they have many free parameters and can represent complex functions of their inputs. As such, they are prone to overfitting [2] on training data, which is very likely when training data is scarce. Previous works have studied ad-hoc network layers [14, 15] that help to mitigate the effects of reduced training set sizes on accuracy. A different approach, which we adopt in this paper, is to augment training data by synthesizing a large amount of plausible training samples from a small set of actual samples. In addition to CNNs, we also implement a more traditional approach to image classification that uses Local Binary Pattern texture descriptors [11] as features for a standard statistical classifier. This approach, which was standard until a few years ago, has now been superseded by deep learning for most tasks involving challenging visual pattern recognition problems. However, the limited availability of training data and the strict requirements in term of training time make the feature-based approach competitive in our scenario. 1.2 Contributions Within the context of our search and rescue scenario, we consider the problem of rapid classifier training, and to the best of our knowledge, this is the first paper to analyze this problem with respect to robotic deployment. We make the following contributions: - We study the relationship of training data volume on classifier accuracy and training time for both feature-based and CNN-based classification approaches. We demonstrate that it is possible to achieve good results with small amounts of training data and time. - We propose a procedure for fast deployment of a terrain classification system in an unknown environment, including data collection, training, and classifier integration on a robotic system, all performed in-flight. 2 Technical Approach Our robot team consists of a lightweight MAV and an all-terrain ground vehicle that can climb moderate grades and traverse small obstacles. Our MAV [5] is equipped with a downward-looking camera, and flies in autonomous and vision-assisted manual flight modes using the visual odometry pipeline SVO [6]. The images from this camera are additionally used for terrain classification, and for elevation mapping using the keyframe-based monocular dense reconstruction pipeline REMODE [12]. We use both the estimated terrain class and elevation to determine traversable paths in the map, and estimate their costs in terms of response time. An accurate terrain classifier is a critical component of this system, in order to identify non-traversable regions (e.g. water), as well as to distinguish between terrain classes that would cause a slow response time (e.g. mud) and a fast one (e.g. concrete). 2.1 Terrain Mapping We consider a finite region of the ground surface to map, and discretize it into a 2D grid of uniformly sized cells. Our overall system seeks to populate the cells in this map with terrain class estimates and elevation, such that a feasible path can be found for the ground robot using that information. Within this system, the classification pipeline utilizes the estimated pose of the MAV to project its image stream into the map. Therefore, when we classify patches in an input image, we can associate the output with particular cells in the map. Additionally, since we have a precise estimate of the elevation of our MAV, we know the absolute scale of these patches, and we can train our classifier with data at only the scales we expect to observe during the mission, greatly simplifying the collection of training data. We accumulate classifications over time by averaging the class probabilities at each map cell over the number of observations there, with a neutral prior of [$$\frac{1}{n}$$], where n is the number of terrain classes. 2.2 Classification We compare two alternative approaches for classifying an image patch into a terrain class: feature-based, which computes Local Binary Pattern (LBP) texture descriptors [11] and then applies a logistic classifier; and CNN-based, which uses a Convolutional Neural Network for classification. The feature-based approach operates on a 50[$$\,\times \,$$]50 pixel image patch as input. On each patch, we compute an LBP descriptor, using 8 neighbors, uniform coding [11] and a given radius r, resulting in a 10-element feature vector. For a given input, we compute the descriptor for [$$r = {1,2,4,8,12}$$] pixels and concatenate the resulting feature vectors. We also add the variance of the image, which is not captured by LBP, as an additional feature. This results in a 51-dimensional feature vector. We then apply a standard classification pipeline based on feature scaling to zero mean and unary variance, and a logistic regression classifier with L2-norm penalty for regularization, and regularization strength parameter [$$= 1.0$$]. The CNN-based architecture implements interleaved convolutional (conv), max-pooling (MP), and Rectified Linear Units (ReLU) layers, followed by fully-connected (FC) layers for classification. This architecture has been shown to perform well in a wide range of pattern recognition tasks [4]. The input layer of the network receives the raw intensity values of a [$$50 \times 50$$] pixel patch. The network is then structured as follows: conv-3x3 with 20 output maps; ReLU; MP-2x2; conv-5x5 with 20 output maps; ReLU; MP-5x5; FC with 100 output neurons; FC with n output neurons, where n is the number of classes. The network is trained using stochastic gradient descent [3], with a base learning rate [$$\alpha =0.01$$], exponential policy for learning rate decay with [$$\gamma =0.998$$], and momentum [$$\mu =0.9$$]. 2.3 Data Collection Our target scenario would involve first responders arriving at a disaster site, deploying an MAV, selecting a few regions in the image stream to use as training data for each class, training the classifier in-flight, and then beginning to classify the terrain in the environment. To facilitate experiments that investigate the effect of training data volume on classifier performance, we captured several datasets of outdoor environments with multiple terrain classes. For each environment, a discretized map of size 10 m [$$\times $$] 10 m and cell size 0.1 m was constructed as described in Sect. 2.1, and the image stream from the MAV was projected into the map. We rectify the images and then crop patches from them that are centered on cells in the map. Using the estimated pose of the camera, each patch is selected as a 1.5 m [$$\times $$] 1.5 m region of the terrain when projected to the ground surface, and then resized to be [$$150\times 150$$] pixels. Note that in this patch, the actual cell represents the center [$$10\times 10$$] pixel region, but we also save the surrounding area for training. For each map cell, we crop a patch if the [$$150\times 150$$] pixel patch is fully contained in the image. [] Fig. 2. Example patches from each class in the canyon dataset showing the ambiguity of the classes (training set on the left, testing set on the right). The rows represent (top to bottom) rock, grass, and water. We flew our MAV over two environments, which we name driveway and canyon, and recorded approximately 3000 images each. We then generated a set of patches as described above, and stored all of the patches for each cell, yielding maps that contained up to 37 patches per cell, with an average across the two datasets of about 9 per cell. A few cells at the periphery of each map received 0 patches, and so we ignore these cells for training and testing. Example patches from the canyon dataset can be seen in Fig. 2, showing the challenge of distinguishing the three classes from each other. In order to generate a map labeled with ground truth, we overlayed and averaged patches from each cell to create a mosaic image, and then annotated this by hand for terrain class. Figure 3 shows the ground truth maps for the two datasets. Since all of our training data is manually labeled, we assume that all of the labels are correct (i.e., dataset quality is not an issue). This is a common assumption in nearly all machine learning applications to robotics, and handling noisy training datasets is still an active research topic in machine learning [10]. However, uncertainty in our MAV’s pose estimate, and strong distortion from our wide-angle lens, results in some patches being projected to incorrect cells in the map, causing some polluted training data. [] Fig. 3. Ground truth labels for our two datasets. Note that we only consider the major classes in our classification problem: {grass, concrete, pavement} for driveway and {grass, rock, water} for canyon. 2.4 Data Augmentation for Classifier Training We adopt a data augmentation algorithm that allows us to generate an arbitrary number of unique samples from a single image patch. We utilize this procedure to effectively boost the size of our training set without additional data collection and labeling. We sample a patch from a random cell in one of our datasets, then perform the procedure described below to generate a training patch, and repeat until we have reached the desired number of training samples for each class. Given an input ([$$150 \times 150$$] px) patch from a training image, we produce any number of ([$$50 \times 50$$] px) patches using the following steps: select a point P within a distance of 10 px from the center of the input patch; sample a random rotation [$$\alpha \in [0, 2\pi )$$]; sample a random scaling factor [$$k \in [0.9, 1.1]$$]; define a square patch centered on P, rotated by [$$\alpha $$], with an edge size of [$$k \cdot 50$$] px; warp this square to a ([$$50 \times 50$$] px) image patch; and finally subtract the mean of the patch to normalize it. Fourteen examples of random augmentations from a single ([$$150 \times 150$$] px) input are shown in Fig. 4. [] Fig. 4. Example of data augmentation output. On the left is the input patch ([$$150\times 150$$] px). All of the smaller ([$$50\times 50$$] px) training patches on the right have been produced using the approach described in Sect. 2.4. 3 Experimental Results The focus of our classification method is rapid adaptation to previously unknown terrains—fast and specific training, instead of the ability to generalize. Therefore, our experiments concern the trade-off between the amount of training data, training time, and accuracy on unseen parts of the map. For the training time, however, we not only take the computation time into account, but also the time required for data acquisition on the spot. Since the critical factor in search-and-rescue missions is the overall response time [9], we perform a series of experiments to address the training time vs. accuracy trade-off in a variety of scenarios. 3.1 Classifier Accuracy and Training Time Experiments For each trial on each dataset, we randomly sample 250 cells per class to be used as potential training data; let [$$S\_\text {tr}$$] be the resulting set of cells. Among the remaining cells, we randomly sample 100 per class to evaluate our classifiers on ([$$S\_\text {te}$$]). The sampled cells have, on average, 11 patches each. For a given experiment, we consider the following parameters: - number of training cells per class (that will be randomly sampled from [$$S\_\text {tr}$$]). - whether to train classifiers on a single patch per training cell (tr-patch) or all patches that map to a given training cell (tr-cell). - how many random variations of each training patch to generate with data augmentation when building the training set for our classifiers. - whether to use the feature-based classifier or the CNN. [] Fig. 5. Effect of training data augmentation strategies on classification performance (left, center) and training time (right). We compare 4 approaches: without data augmentation (aug1, light) and with 10 random variations per patch (aug10, dark); considering a single training patch per training cell (patch-tr, blue), or considering all patches that backproject to a given training cell (cell-tr, green). We report the AUC of the classifiers computed on all testing patches (left) and after averaging all testing patches belonging to the same testing cell (center). [] Fig. 6. Patch-based AUC as a function of the number of training cells per class (x axis) for the driveway (left) and canyon (center) datasets; data are reported separately for the feature-based (red) and cnn-based (yellow) classifiers. Once we train the chosen classifier on the resulting training set, we evaluate it on all patches in [$$S\_\text {te}$$]. We compute three metrics: - The patch-based performance of the classifier; it is expressed as the Area under the ROC Curve (AUC) computed over all patches in [$$S\_\text {te}$$]. - The cell-based performance of the classifier; it is expressed as the Area under the ROC Curve (AUC) computed over all cells in [$$S\_\text {te}$$]. A given cell is classified by averaging the classification vectors obtained for all cells that belong to it. - The overall training time of the classifier (on a single Intel i7 core), including the time required to compute features for the training data. The AUC is the preferred metric for evaluating classifier performance because it is not affected by the prevalence of classes in the testing set, does not depend on a threshold, and also captures the quality of the probabilistic information that the classifier produces. Note that any dummy classifier (e.g. a random classifier, or a classifier that only returns the majority class) yields an AUC of 0.5. This is the baseline for all the AUC plots. We repeat each experiment five times, with different random selections for the subset of [$$S\_\text {tr}$$] that is used for training in each trial. Figures 5 and 6 report these results, with standard deviation over the 5 trials as error bars. 3.2 Training Pipeline Experiments In these experiments, we demonstrate our full training and classification system in the context of a search and rescue mission, and we deploy our quadrotor in the same two environments as our previous experiments. Our experimental design is detailed in Algorithm 1, and represents our proposed procedure for mission deployment of our system. We designed a software interface that allows the user to select a rectangular region of an image and label it as a terrain class. The selected region is projected to the surface map, and the cells that it overlaps are assigned the corresponding label. We deploy our flying robot, fly over a region in the map with a consistent terrain class, and then select that region in the image stream using our software. Meanwhile, patches from the image stream that are centered on these cells are projected to the surface map, cropped to a consistent size ([$$150 \times 150$$] px, as before), and associated with the labeled cells. After labeling a region for each terrain class, all of the patches that subsequently project to these labeled cells are collected together as the training set. In this way, we gather training patches for n terrain classes by simply selecting n regions of interest on the image stream. [] We follow this procedure for both the driveway and canyon datasets, and are able to perform our full training procedure (see Algorithm 1), from MAV launch to classification, in 60.44 s and 60.12 s, respectively (averaged over 5 trials each, computed on a single core of an Intel i7). These times each include approximately 8 s of training for the feature-based classifier on 1000 patches of each terrain class, randomly sampled from the training patches accumulated up to the start of training. We then proceed to survey the environment as we would in a mission scenario, and classify patches sampled from the image stream, while accumulating terrain class probability estimates in the map cells to which they project. Examples of the terrain class maps, generated using an “on-the-spot” trained classifier, are shown in Fig. 7, with ground truth labels in Fig. 3 for reference. [] Fig. 7. Snapshots of the terrain classification map using a classifier that was trained in-flight. At each time t since the classifier finished training, there is a camera image overlaid with the current classification in the map, and the terrain class map (where RGB colors represent classification probability) overlaid with the area in view of the camera (white dotted rectangle) (right). 4 Discussion The experiments in Sect. 3.1 demonstrate that when training data is limited, data augmentation is necessary to achieve acceptable performance from the CNN. Conversely, the feature-based classifier performs reasonably well even without data augmentation (Fig. 5). This is expected, since the CNN operates on raw image data and must be taught rotational invariance through large training sets and/or data augmentation, while the feature-based classifier uses features that are rotationally-invariant by design. We additionally found that classifier performance computed at the level of testing cells is higher than performance computed at the level of individual patches (Fig. 5 center vs left). In the former case, one averages multiple classifications obtained for different patches that map to the same cell, thus increasing the accuracy of the classification. Training time for the CNN depends mainly on the fixed number of training iterations, which operate on fixed-size batches of data randomly sampled from the training set, and is therefore mostly independent of the number of training samples (right plots in Figs. 5 and 6). The training time of the feature-based classifier, is instead dominated by the time required to compute LBP features on all of the training samples. Therefore, it is heavily affected by the size of the training set and the amount of data augmentation, which as noted above does not have a significant effect on performance. For both the feature-based classifier and the CNN, performance improves with the number of training cells but the canyon dataset is more challenging than driveway for both classifier types, due to ambiguous textures (see Fig. 2) However, despite the difference in their structure, the performance of the feature-based and the CNN-based classifiers are similar, in particular when relatively large amounts of training data are available. Note that with a small number of training cells, the performance of the resulting classifiers heavily depends on the choice of such training cells. If “bad” or non-representative training cells are chosen for training, the classifier will underperform. This is visible in Fig. 6 as the height of the error bars. Training with approximately 10 cells (on average, about 110 patches) per class gives a good compromise between training time and AUC for the feature-based classifier. Increasing the training set size to 100 cells per class slightly improves performance while keeping the training time manageable, with diminishing returns above that dataset size. With the current settings, the CNN-based classifier requires at least several minutes for full training regardless of the amount of training data and augmentation. This performance could be optimized by one or two orders of magnitude by using multiple CPU cores, adding GPU support, and optimizing the solver parameters for speed. However, at the moment we do not observe enough performance advantage to justify investment in these optimizations. Our full pipeline experiments show that we can begin classifying approximately 60 s after launching the MAV, so surveying the environment for terrain classification with an on-the-spot trained classifier is certainly feasible within the battery life of even a small MAV. The speed and ease with which we can label a large amount of training data is only possible because we collect it from a mobile robot that maintains an estimate of its position and orientation in space. By labeling a region of the environment, rather than an image, we are able to multiply the training set size by automatically labeling subsequent frames using that spatial reference. This allows us to collect thousands of training samples in a matter of seconds without manually labeling all of them. 5 Conclusions We have proposed and validated a system for “on-the-spot” classifier training for terrain mapping, in the context of a search-and-rescue air-ground robot team. When deployed in a disaster area, our MAV can gather training data, train a classifier, and begin terrain mapping in flight, within one minute of launch. During the development of this system, we thoroughly evaluated several classifier architectures and training approaches in terms of both quantitative performance and training time. This is the first work to demonstrate a system that can be trained and utilized immediately, in situations where response time is critical. Although our experiments utilized small, fully trained CNNs, we intend to explore whether large pre-trained networks may be a feasible alternative, since we may be able to reduce the CNN training time if we must only train the final layer using the “on-the-spot” data. Our scenario also offers the opportunity for online learning if the user labels new regions of the map once the classifier has been trained. We intend to explore whether higher accuracy can be achieved by utilizing user input to correct or disambiguate difficult classification areas. References 1. Azimi-Sadjadi, M.R., Ghaloum, S., Zoughi, R.: Terrain classification in SAR images using principal components analysis and neural networks. IEEE Trans. Geosci. Remote Sens. 31(2), 511–515 (1993)CrossRef 2. Bishop, C.M.: Pattern Recognition and Machine Learning. Springer-Verlag New York Inc., Secaucus (2006)MATH 3. Bottou, L.: Stochastic gradient descent tricks. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) NN: Tricks of the Trade, 2nd edn. LNCS, vol. 7700, pp. 421–436. Springer, Heidelberg (2012). doi:10.​1007/​978-3-642-35289-8\_​25 CrossRef 4. Ciregan, D., Meier, U., Schmidhuber, J.: Multi-column deep neural networks for image classification. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3642–3649. IEEE (2012) 5. Faessler, M., Fontana, F., Forster, C., Mueggler, E., Pizzoli, M., Scaramuzza, D.: Autonomous, vision-based flight and live dense 3D mapping with a quadrotor MAV. J. Field Rob., 1556–4967 (2015) 6. Forster, C., Pizzoli, M., Scaramuzza, D., SVO: fast semi-direct monocular visual odometry. In: IEEE International Conference on Robotics and Automation (ICRA) (2014) 7. LeCun, Y., Huang, F.J., Bottou, L.: Learning methods for generic object recognition with invariance to pose and lighting. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, vol. 2, p. II-97. IEEE (2004) 8. Montoya-Zegarra, J.A., Wegner, J.D., Ladicky, L., Schindler, K.: Semantic segmentation of aerial images in urban areas with class-specific higher-order cliques. ISPRS Ann. 1, 127–133 (2015) 9. Murphy, R.R.: Disaster Robotics. MIT Press, Cambridge (2014) 10. Natarajan, N., Dhillon, I.S., Ravikumar, P.K., Tewari, A.: Learning with noisy labels. In: Advances in Neural Information Processing Systems, pp. 1196–1204 (2013) 11. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002)CrossRefMATH 12. Pizzoli, M., Forster, C., Scaramuzza, D.: REMODE: probabilistic, monocular dense reconstruction in real time. In: IEEE International Conference on Robotics and Automation (ICRA) (2014) 13. Sofman, B., Bagnell, J.A., Stentz, A., Vandapel, N.: Terrain classification from aerial data to support ground vehicle navigation. Technical Report CMURI-TR-05-39, Carnegie Mellon University, January 2006 14. Srivastava, N., Hinton, G.E., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)MathSciNetMATH 15. Zeiler, M.D., Fergus, R.: Stochastic pooling for regularization of deep convolutional neural networks. In: International Conference on Learning Representations (2013) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_51 Safe Navigation of Quadrotor Teams to Labeled Goals in Limited Workspaces Sarah Tang¹  , Justin Thomas¹   and Vijay Kumar¹   (1) GRASP Laboratory, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA     Sarah Tang (Corresponding author) Email: sytang@seas.upenn.edu   Justin Thomas Email: jut@seas.upenn.edu   Vijay Kumar Email: kumar@seas.upenn.edu Abstract In this work, we solve the labeled multi-robot planning problem. Most proposed algorithms to date have modeled robots as kinematic or kinodynamic agents in planar environments, making them impractical for real-world systems. Here, we present experiments to validate a centralized multi-robot planning and trajectory generation method that explicitly accounts for robots with higher-order dynamics. First, we demonstrate successful execution of solution trajectories. Next, we verify the robustness of the robots’ trajectory tracking to unmodeled external disturbances, in particular, the aerodynamic interactions between co-planar neighbors. Finally, we apply our algorithm to navigating quadrotors away from the downwash of their neighbors to improve safety in three-dimensional workspaces. Keywords Aerial roboticsTrajectory generationMulti-robot planning 1 Introduction Multi-robot teams have been used to complete many complex tasks, including warehouse management [1], package delivery [2], and surveillance [3]. These applications often require each agent to safely and quickly navigate to goal locations to complete tasks, where tasks are non-interchangable between robots. This is called the labeled multi-robot planning problem. Many approaches have been presented for solving this problem, including variations on traditional single-robot planners [4], graph-search [5], optimization-based [6], and rule-based [7]. However, algorithmic guarantees often assume robots are perfect kinematic or kinodynamic agents. In reality, robots often have higher-order dynamics that could potentially make motions planned by kinematic algorithms infeasible. This problem of generating feasible, optimal, collision-free trajectories for dynamic robots has mostly been neglected. As a result, experimental validation of multi-robot algorithms has been limited to simulation or teams of four to five ground robots that can be approximated as kinematic, such as the Pioneer [8], iRobot Create [9], Dr. Robot Jaguar Lite [10], or other similar custom platforms [11, 12]. These robots move at relatively slow velocities in planar environments, and so, results on these platforms do not necessarily indicate applicability to fast-moving vehicles with higher-order dynamics, such as quadrotors. [] Fig. 1. A multi-robot team of five quadrotors. A number of multi-robot planning algorithms have been successfully implemented on quadrotors. For example, Alonso-Mora et al. [13] propose methods for collision avoidance using Velocity Obstacles, with experimental validation using two robots. Omidshafiei et al. [14] propose a Decentralized Partially Observable Semi-Markov Decision Process framework for planning and include experiments using four quadrotors. Yet, even in these applications, quadrotors move relatively slowly and operate as planar systems at a common altitude. Similar efforts have been made in the unlabeled multi-robot planning domain, where robots can swap goals. Turpin et al. [3] demonstrate navigation of six small vehicles through an obstacle-filled space and Mohta et al. [15] demonstrate six quadrotors cooperatively completing an outdoor surveillance mission. However, these approaches are not suitable for applications like package delivery, where robots are non-interchangeable. This work aims to experimentally validate a centralized multi-robot planning and trajectory generation algorithm that explicitly accounts for robot dynamics. Figure 1 shows an image of our experimental testbed. We present results from three experiments. First, we execute solution trajectories for a team of four robots for problems with various levels of congestion in a limited workspace. We demonstrate that our algorithm successfully accounts for the fourth-order, underactuated, dynamics of the quadrotor and plans safe, yet fast, motions. Next, we test the robots’ robustness to aerodynamic interactions between co-planar neighbors. Finally, we apply our algorithm to quadrotor teams operating in the full three-dimensional workspace. Specifically, we show that our algorithm is able to improve trajectory tracking by navigating robots away from the downwash of their neighbors. 2 Technical Approach Consider a team of robots operating in an obstacle-free two-dimensional workspace. Each robot is contained in a disk of radius R and has [$$n^{th}$$]-order dynamics: [$$\begin{aligned} \mathbf {x}\_i^{(n)}(t)&= \mathbf {u}\_i(t) \end{aligned}$$] (1) [$$\mathbf {x}\_i$$], [$$\mathbf {u}\_i$$] are the position and input of robot i, respectively. Let [$$\mathbf {X}\_i$$] denote the state: [$$\begin{aligned} \mathbf {X}\_i = [\mathbf {x}\_i \ \ \dot{\mathbf {x}}\_i \ \ ... \ \ \mathbf {x}^{(n-1)}\_i]^T \end{aligned}$$] (2) We define the labeled multi-robot planning problem as follows: given N robots, indexed using [$$i \in [1, N]$$], with start positions [$$\mathbf {s}\_i \in \mathbb {R}^2$$] and goal positions [$$\mathbf {g}\_i \in \mathbb {R}^2$$], find trajectories [$$\gamma \_i(t): \mathbb {R} \rightarrow \mathbb {R}^2$$] that navigate all robots safely from start to goal. Our proposed algorithm solves this problem in two steps. In the motion planning step, we find a safe motion plan for each robot, [$$\mathcal {M}\_i = \{ \mathcal {T}, \mathcal {X}\_{i, des} \}$$]. [$$\mathcal {T} = \{t\_0, t\_1, ..., t\_m\}$$] is a set of times and [$$\mathbf {X}\_{des, i}^t = \{\mathbf {X}\_{des, i}^{t\_0}, \mathbf {X}\_{des, i}^{t\_1}, ..., \mathbf {X}\_{des, m}^{t\_1}\}$$] is the set of corresponding desired states. For each [$$\mathbf {X}\_{des, i}^{t\_j}$$], the desired position must be specified, however, higher derivatives can be unspecified. While all robots have unique sets of desired states, they share a common [$$\mathcal {T} = \{t\_0, t\_1, ...\}$$]. We find this motion plan using the OMP\_CHOP algorithm [16], summarized in Fig. 2. Each robot’s initial motion plan is a straight-line trajectory to its goal. The algorithm iteratively resolves inter-robot collisions by constructing Circular HOlding Patterns (CHOPs). Each CHOP consists of a series of waypoints navigating the affected robots in a collision-free manner to their goals. The final solution, Fig. 2c, allows robots to take direct trajectories to their goals when possible and navigates them into CHOPs through congested areas. Algorithm details can be found in [16]. [] Fig. 2. Illustration of motion planning algorithm. Robots must navigate from start positions, circles, to corresponding goals, stars of the same color. (b) shows a CHOP between red and navy robots, with waypoints as black squares. In (c), a CHOP is constructed between blue and green robots, causing a collision. (d) refines blue, green, and yellow robots’ plans to one CHOP. In the trajectory generation step, we transform motion plans, [$$\mathcal {M}\_i$$], to trajectories, [$$\gamma \_i(t)$$]. Each [$$\gamma \_i(t)$$] is a piecewise polynomial, [$$\gamma \_i(t) = [x\_i(t) \ \ y\_i(t)]^T$$], where: [$$\begin{aligned} x\_i(t)&= {\left\{ \begin{array}{ll} \mathop {\sum }\nolimits \_{k = 0}^{2n-1} c^i\_{k, 1, x}t^k, &{} t\_0 \le t< t\_1 \\ \mathop {\sum }\nolimits \_{k = 0}^{2n-1} c^i\_{k, 2, x}t^k, &{} t\_1 \le t < t\_2 \\ ... \\ \mathop {\sum }\nolimits \_{k = 0}^{2n-1} c^i\_{k, m, x}t^k, &{} t\_{m-1} \le t \le t\_{m }\\ \end{array}\right. } \end{aligned}$$] (3) [$$y\_i(t)$$] is defined analogously. Let vector [$$\mathbf {d}\_i$$] contain the unknown coefficients: [$$\begin{aligned} \nonumber \mathbf {d}\_i&= \left[ c^i\_{0, 1, x} \ \ c^i\_{1, 1, x} \ \ ... \ \ c^i\_{2n-1, 1, x} \ \ c^i\_{0, 2, x} \ \ c^i\_{1, 2, x} \ \ ... \ \ c^i\_{2n-2, m, x} \ \ c^i\_{2n-1, m, x} \right. \\&\left. c^i\_{0, 1, y} \ \ c^i\_{1, 1, y} \ \ ... \ \ c^i\_{2n-2, m, y} \ \ c^i\_{2n-1, m, y} \right] ^T \end{aligned}$$] (4) We solve for [$$\mathbf {d}\_i$$] using a Quadratic Program (QP) that minimizes the quadratic cost functional: [$$\begin{aligned} J\_i = {\int \_{t\_0}^{t\_{m}} \left| \left| \frac{d^n}{dt^n} { {\gamma }\_i(t) } \right| \right| \_2^2 dt} \end{aligned}$$] (5) subject to: 1. 1. Waypoint constraints: Trajectory begins and ends at rest at start and goal.   2. 2. Continuity constraints: Trajectory is at least [$$\mathcal {C}^{n-1}$$] everywhere.   3. 3. Safety constraints: All robots’ trajectories are mutually collision-free.   4. 4. Workspace constraints: Trajectory remains in the given workspace.   Details of the QP problem formulation can be found in [17]. This proposed algorithm is guaranteed to be safe and complete. 3 Experimental Architecture We apply our algorithm to a team of quadrotors. Adopting the dynamic model presented by Mellinger et al. [18], let [$$\{ \mathbf {e}\_1, \mathbf {e}\_2, \mathbf {e}\_3\}$$] be unit coordinate axes of an inertial reference frame and [$$\{ \mathbf {b}\_1, \mathbf {b}\_2, \mathbf {b}\_3\}$$] be a body frame fixed to a robot. Let m represent the robot’s mass, [$$\mathcal {I}$$] represent the inertia matrix, and g represent the gravity constant. Let [$$\mathbf {r} \in \mathbb {R}^3$$] denote the position of the robot’s center of mass, [$$R \in SO(3)$$] denote the world-to-body rotation matrix describing its orientation, and [$$\mathbf {\Omega } \in \mathbb {R}^3$$] denote its body-frame angular velocity. Finally, the inputs are represented by a thrust force of magnitude [$$f \in \mathbb {R}$$] and a moment vector [$$\mathbf {M} \in \mathbb {R}^3$$] expressed in body-frame components. The vehicle dynamics are given by: [$$\begin{aligned} \nonumber m(\ddot{\mathbf {r}}+g\mathbf {e}\_3)&= fR\mathbf {e}\_3 \\ \mathcal {I} \dot{\mathbf {\Omega }} + \mathbf {\Omega } \times \mathcal {I} \mathbf {\Omega }&= \mathbf {M} \end{aligned}$$] (6) Robots navigate using the controller in [19]. The desired thrust is: [$$\begin{aligned} f = (k\_x \mathbf {e}\_x + k\_v \mathbf {e}\_v + m g \mathbf {e}\_3 + m \ddot{\mathbf {r}}\_d) \cdot R \mathbf {e}\_3 \equiv \mathbf {f}\_{des} \cdot R \mathbf {e}\_3 \end{aligned}$$] (7) [$$\mathbf {e}\_x$$] and [$$\mathbf {e}\_v$$] are the position and velocity errors, respectively, [$$k\_x$$] and [$$k\_v$$] are positive gains, and [$$\ddot{\mathbf {r}}\_d$$] is a feed-forward acceleration. The desired moments are: [$$\begin{aligned} \mathbf {M} = k\_R \mathbf {e}\_R + k\_\mathbf {\Omega } \mathbf {e}\_\mathbf {\Omega } + \mathbf {\Omega } \times \mathcal {I} \mathbf {\Omega } - \mathcal {I} \left( \hat{\mathbf {\Omega }} R^T R\_c \mathbf {\Omega }\_c - R^T R\_c \dot{\mathbf {\Omega }}\_c \right) \end{aligned}$$] (8) [$$\mathbf {e}\_R$$] and [$$\mathbf {e}\_\mathbf {\Omega }$$] are attitude and angular velocity error terms, respectively, [$$k\_R$$] and [$$k\_\mathbf {\Omega }$$] are positive gains, and [$$\dot{\mathbf {\Omega }}\_c $$] is a commanded angular velocity calculated from the desired trajectory. This controller is provably exponentially stable as long as the initial attitude error is less than [$$\pi /2$$]. Further details can be found in [19]. Mellinger et al. [18] show that quadrotors are [$$4^{th}$$] order differentially flat systems, with flat outputs [$$[\mathbf {r} \ \ \psi ]^T$$], where [$$\psi $$] is the yaw angle of the quadrotor. As a result, a set of sufficiently smooth time-parametrized trajectories for x, y, z and [$$\psi $$] of the quadrotor is sufficient to calculate the feed-forward and commanded terms necessary for the controller. Given a labeled multi-robot planning problem, we use the algorithm described in Sect. 2, with [$$n = 4$$], to generate minimum-snap trajectories in the x and y dimensions. We then maintain [$$\psi = 0$$] and a constant altitude z for all time. [] Fig. 3. Ascending Technologies Hummingbird quadrotor used for experiments. Our experimental platform is the Ascending Technologies Hummingbird¹, pictured in Fig. 3. The robot has a rotor-tip-to-rotor-tip distance of 54 cm. Figure 4 illustrates the architecture of our experimental testbed. Each robot carries an ODROID XU3 computer, a wireless ethernet adaptor, and a USB serial adaptor. This approach is a departure from previous multi-robot experiments, which used ZigBee modules for communication and lacked the ability to communicate rich or large amounts of information (ex. images). The position and velocity of each robot is obtained from a Vicon² motion capture system at 100 Hz. The robots are operating in a workspace of about 3.5 m by 3.5 m in the xy-directions and 1.5 m in height. The total mass of the quadrotor, with onboard equipment, is 703 g. [] Fig. 4. Architecture of the experimental system. A base station runs our trajectory planner along with a trajectory tracker and controller for each robot. A desired force vector ([$$\mathbf {f}\_{des}$$]) and attitude ([$$R\_{des}$$]) are computed and sent to the robot’s low-level controller. Finally, the state of the robot is observed using a motion capture system. 4 Results In this section, we present results from our experiments, with video at https://​youtu.​be/​XfC6zvE1uIc. For these experiments, robots were flown at a constant altitude of 1.5 m, and we only analyze motion in the plane. 4.1 Experimental Verification of Proposed Algorithm We first validate our algorithm’s ability to generate safe trajectories for the team under different geometries of start and goal locations. We executed solution trajectory sets to four different problems for a four-robot team. Each scenario demands different subsets of the team to enter a CHOP to safely navigate to their goals. These examples are pictured in Figs. 5a–d. [] Fig. 5. Execution of four-robot trajectory sets, single trial. Robots must navigate from start positions, circles, to assigned goals, stars of the same color. In Fig. 5a, while robots’ straight-line paths intersect, their trajectories’ time parameterizations are such that they do not collide and no holding patterns are necessary. Figure 5b shifts the goal of the red robot. As a result, the red and blue robots must navigate around each other. Figures 5c and d show three and four robots in a holding pattern, respectively. As can be expected, holding patterns involving more robots take up more of the available workspace. Figure 6a displays the magnitude of the robots’ velocities in the plane (“forward velocity”) over time for the fastest robot in each solution set. In each set, a robot exceeds 1 m/s, and in the most aggressive case, a robot reaches a maximum velocity of almost 1.6 m/s. [] Fig. 6. Performance statistics for four-robot solution sets, single trial. We reliably executed each solution set over five trials. Figure 6b plots the magnitude of the position error in the plane (“planar error”) averaged across all robots within one representative trial. Figure 6c reports the minimum tip-to-tip separation between robots. The average position error is consistently less than 6 cm, and in each scenario, robots’ propellers come within 46 cm (0.85 body-lengths). 4.2 Robustness to Unmodeled Dynamics Next, we test the robustness of our system to unmodeled dynamics by observing trajectory tracking error when executing various holding patterns. This is an indicator of the planned trajectories’ optimality with respect to robot dynamics. Table 1. Trajectory errors for unmodeled dynamics experiments over five trials. +------------+----------+-----------------+---------------+---------------+---------------+ | Experiment | Robots | Max. vel. (m/s) | Min. sep. (m) | Avg. err. (m) | Max. err. (m) | +:===========+:=========+:================+:==============+:==============+:==============+ | 1 | 3 robots | 0.61 | 0.69 | 0.045 | 0.12 | +------------+----------+-----------------+---------------+---------------+---------------+ | 2 | 3 robots | 1.4 | 0.70 | 0.047 | 0.12 | +------------+----------+-----------------+---------------+---------------+---------------+ | 3 | 3 robots | 1.9 | 0.67 | 0.053 | 0.14 | +------------+----------+-----------------+---------------+---------------+---------------+ | 4 | 5 robots | 1.1 | 0.28 | 0.039 | 0.14 | +------------+----------+-----------------+---------------+---------------+---------------+ Table 1 characterizes CHOPs with the maximum forward velocity and the minimum separation between robots’ propellers. We report planar error averaged across all robots in all trials and the maximum error of any robot over all trials. Based on the three-robot experiments, as the robots’ maximum velocities increase, both the average and maximum errors increase. This is likely caused by both the increased difficulty in trajectory tracking at higher velocities and the increase in aerodynamic disturbances from neighboring vehicles as propeller speeds increase. In the most aggressive holding pattern, robots reach maximum velocities of almost 2 m/s. However, the average trajectory error is still never greater than 6 cm. Further, the maximum tracking error is less than 15 cm. [] Fig. 7. Five robots executing the holding pattern in Table 1, Experiment 4. We were also able to successfully execute a CHOP with five robots, as shown in Fig. 7, safely within the workspace limits. Even with neighboring propellers coming within 28 cm (0.52 body-lengths) of each other, the average and maximum errors are still less than 4 cm and 15 cm, respectively. Figures 7a–d show snapshots of the quadrotors in flight. Further, Fig. 7e plots the magnitude of the position error averaged across all robots for each trial. The error is generally around 2–3 cm and always less than 8 cm. Figure 7f displays the minimum rotor-tip-to-rotor-tip distance between any pair of robots. While Table 1 reflects little difference in average errors across experiments, we see from the time-based plots that there is in fact an increase in position error as robots move closer to each other. 5 Application: Avoiding Downwash in Three-Dimensional Workspaces for Improved Trajectory Tracking We finally apply our planar multi-robot trajectory generation algorithm to a three-dimensional workspace. A tempting solution to the labeled planning problem for quadrotors is to simply stagger the vehicles’ altitudes and allow them to move directly to their designated goals. While this is theoretically possible, the downwash from robots at higher altitudes could perturb robots at lower altitudes in unpredictable ways, potentially making this solution unsafe. In fact, Mellinger et al. [20] characterize the downwash effect for Hummingbird quadrotors to be “most concentrated in a cylindrical region with a radius of approximately 0.5 m extending to a height of 1.5 m below the quadrotor. This cylindrical region bounds the volume where the z displacement error is greater than 5 cm.” Thus, even when executing trajectories at different altitudes, it could be advantageous for quadrotors to avoid flying directly below neighbors. We conducted experiments with teams of two and three robots. We assign robots’ start and goal positions in the full three-dimensional space. Each robot’s start and goal are at the same altitude and no pair of robots share starts or goals with the same planar coordinates. First, we allow robots to take minimum-snap straight-line trajectories to their goals. Next, we apply our proposed algorithm as if all robots were operating on a common plane. Each robot then executes this CHOP trajectory at its originally assigned altitude. In this way, we use the CHOP to ensure that vehicles do not “collide” with the downwash of their neighbors. We executed five trials of each trajectory set. [] Fig. 8. Execution of straight-line trajectories vs. holding pattern, single trial. Table 2. Vertical error for three-dimensional experiments over five trials. +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | Robots | Max. vel. (m/s) | Vert. sep. (m) | Not single CHOP errors | | Single CHOP errors | | +:=======+:================+:===============+:=======================+:=========+:===================+:=========+ | | | | Avg. (m) | Max. (m) | Avg. (m) | Max. (m) | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | 2 | 1.3 | 0.90 | 0.020 | 0.17 | 0.016 | 0.050 | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | 3 | 1.3 | 1.0 | 0.026 | 0.27 | 0.017 | 0.12 | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ Figure 8 displays the actual trajectories from one representative trial of each experiment. Figures 8a–b show tracking of the straight-line trajectories. In each case, there is a significant disturbance in the vertical direction for robots at lower altitudes. Figures 8c–d show the proposed CHOP trajectories. The vertical trajectory error significantly decreases. Table 2 lists the maximum forward velocities and vertical separations between robots for each experiment, along with error statistics averaged over five trials of each trajectory set. For each experiment, the vertical separation between robots was about 1 m, within the 1.5 m height of the disturbance region characterized by Mellinger et al. [20]. In each case, the maximum vertical error decreases dramatically by over 10 cm after the introduction of the CHOP to the robots’ trajectories. The average vertical error decreases as well. Figure 9 displays the absolute value of the vertical error over time for a representative trial of each experiment. Figure 9a shows that when two robots are executing straight-line trajectories, the higher robot’s vertical error remains approximately 4 cm. However, the lower robot has a large vertical error at around 4 s, when it crosses paths with its neighbor. When robots execute a CHOP, the lower robot’s vertical error decreases to around 4 cm throughout its trajectory. Similarly, Fig. 9b shows that in the three-robot experiment, while executing straight-line trajectories, the middle and lower robots have large vertical errors at around 4 s. The middle robot is perturbed around 15 cm, while the lower robot, subject to disturbances from both neighbors above it, is perturbed vertically by over 25 cm. However, when all three robots execute a single CHOP, vertical errors for all robots remain at below 5 cm throughout their trajectories. [] Fig. 9. Absolute vertical error over time in three-dimensional experiments, single trial. Table 3 lists the average and maximum planar errors of robots in each experiment over five trials. We confirm that coordination of all robots in a CHOP does not significantly affect planar errors. Table 3. Planar error for three-dimensional experiments over five trials. +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | Robots | Max. vel. (m/s) | Vert. sep. (m) | Not single CHOP errors | | Single CHOP errors | | +:=======+:================+:===============+:=======================+:=========+:===================+:=========+ | | | | Avg. (m) | Max. (m) | Avg. (m) | Max. (m) | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | 2 | 1.3 | 0.90 | 0.040 | 0.10 | 0.037 | 0.099 | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ | 3 | 1.3 | 1.0 | 0.042 | 0.14 | 0.038 | 0.16 | +--------+-----------------+----------------+------------------------+----------+--------------------+----------+ Overall, these results suggest that for quadrotor teams, simply staggering altitudes of vehicles might not be a realistic solution because of the large vertical perturbations that occur as robots cross their neighbors. As a result, coordination between vehicles might still be necessary, particularly when the vertical separation between robots is not large enough to safely allow for deviations. Experiments show that our algorithm is a feasible solution to this problem. 6 Conclusions In this work, we present experimental validation of a centralized labeled multi-robot planning and trajectory generation algorithm. Unlike methods that model robots as kinematic agents, our algorithm returns optimal trajectories with respect to the robots’ dynamics. Experiments verify that solution trajectories are safe and practical for a quadrotor team. We further show that robots can robustly track these trajectories even when operating in close proximity. Finally, we demonstrate the successful application of our algorithm to three-dimensional workspaces. We believe these experimental insights will allow for development of larger, more complex quadrotor teams that may be used for tasks such as package delivery. Acknowledgments We gratefully acknowledge the support of ONR grants N00014-09-1-1051 and N00014-09-1-103, ARL grant W911NF-08-2-0004, ARO grant W911NF-13-1-0350, and Exyn Technologies. Sarah Tang is supported by NSF Research Fellowship Grant No. DGE-1321851. References 1. Enright, J.J., Wurman, P.R.: Optimization and coordinated autonomy in mobile fulfillment systems. In: AAAI Conference on Artificial Intelligence (2011) 2. Forbes: Meet amazon prime air, a delivery-by-aerial-drone project, December 2013 3. Turpin, M., Mohta, K., Michael, N., Kumar, V.: Goal assignment and trajectory planning for large teams of interchangeable robots. Auton. Rob. 37(4), 401–415 (2014)CrossRef 4. Goldenberg, M., Felner, A., Stern, R., Sharon, G., Sturtevant, N., Holte, R.C., Schaeffer, J.: Enhanced partial expansion A\*. J. Artif. Intell. Res. 50(1), 141–187 (2014)MathSciNetMATH 5. Wagner, G., Choset, H.: Subdimensional expansion for multirobot path planning. Artif. Intell. 219, 1–24 (2015)MathSciNetCrossRefMATH 6. Yu, J., Rus, D.: An effective algorithmic framework for near optimal multi-robot path planning. In: The International Symposium on Robotics Research (ISRR) (2015) 7. Luna, R., Bekris, K.E.: Push and swap: Fast cooperative path-finding with completeness guarantees. In: Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), pp. 294–300 (2011) 8. Bennewitz, M., Burgard, W., Thrun, S.: Finding and optimizing solvable priority schemes for decoupled path planning techniques for teams of mobile robots. Rob. Auton. Syst. 41(2), 89–99 (2002)CrossRef 9. Desaraju, V., How, J.P.: Decentralized path planning for multi-agent teams in complex environments using rapidly-exploring random trees. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 4956–4961 (2011) 10. Wiktor, A., Scobee, D., Messenger, S., Clark, C.: Decentralized and complete multi-robot motion planning in confined spaces. In: Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1168–1175 (2014) 11. Clark, C.M., Bretl, T., Rock, S.M.: Applying kinodynamic randomized motion planning with a dynamic priority system to multi-robot space systems. In: Proceedings of the 2002 IEEE Aerospace Conference, March 2002 12. Pallottino, L., Scordio, V.G., Frazzoli, E., Bicchi, A.: Decentralized cooperative policy for conflict resolution in multi-vehicle systems. IEEE Trans. Rob. 23, 1170–1183 (2007)CrossRef 13. Alonso-Mora, J., Naegeli, T., Siegwart, R., Beardsley, P.: Collision avoidance for aerial vehicles in multi-agent scenarios. Auton. Rob. 39(1), 101–121 (2015)CrossRef 14. Omidshafiei, S., Agha-mohammadi, A., Amato, C., How, J.P : Decentralized control of partially observable markov decision processes using belief space macro-actions. In: IEEE International Conference on Robotics and Automation (ICRA) (2015) 15. Mohta, K., Turpin, M., Kushleyev, A., Mellinger, D., Michael, N., Kumar, V.: QuadCloud: a rapid response force with quadrotor teams. In: Hsieh, M.A., Khatib, O., Kumar, V. (eds.) Experimental Robotics. STAR, vol. 109, pp. 577–590. Springer, Heidelberg (2016). doi:10.​1007/​978-3-319-23778-7\_​38 CrossRef 16. Tang, S., Kumar, V.: A complete algorithm for generating safe trajectories for multi-robot teams. In: International Symposium on Robotics Research (2015) 17. Tang, S., Kumar, V.: Safe and complete trajectory generation for large teams of robots with higher-order dynamics. In: Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016 18. Mellinger, D., Kumar, V.: Minimum snap trajectory generation and control for quadrotors. In: Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 2520–2525 (2011) 19. Lee, T., Leok, M., McClamroch, N.H.: Control of complex maneuvers for a quadrotor UAV using geometric methods on SE(3). Asian J. Control Vol. 15, 391–408 (2011)CrossRefMATH 20. Michael, N., Mellinger, D., Lindsey, Q., Kumar, V.: The GRASP multiple micro-UAV testbed. IEEE Rob. Autom. Mag. 17(3), 56–65 (2010)CrossRef Footnotes 1 http://​www.​asctec.​de/​en/​.   2 http://​www.​vicon.​com/​.   Grasping 2 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_52 Using Vision for Pre- and Post-grasping Object Localization for Soft Hands Changhyun Choi¹  , Joseph Del Preto¹   and Daniela Rus¹   (1) Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA     Changhyun Choi (Corresponding author) Email: cchoi@csail.mit.edu URL: http://people.csail.mit.edu/cchoi/vision4softhand/   Joseph Del Preto Email: delpreto@csail.mit.edu URL: http://people.csail.mit.edu/cchoi/vision4softhand/   Daniela Rus Email: rus@csail.mit.edu URL: http://people.csail.mit.edu/cchoi/vision4softhand/ Abstract In this paper, we present soft hands guided by an RGB-D object perception algorithm which is capable of localizing the pose of an object before and after grasping. The soft hands can perform manipulation operations such as grasping and connecting two parts. The flexible soft grippers grasp objects reliably under high uncertainty but the poses of the objects after grasping are subject to high uncertainty. Visual sensing ameliorates the increased uncertainty by means of in-hand object localization. The combination of soft hands and visual object perception enables our Baxter robot, augmented with soft hands, to perform object assembly tasks which require high precision. The effectiveness of our approach is validated by comparing it to the Baxter’s original hard hands with and without the in-hand object localization. Keywords Soft handsSoft gripperIn-hand object localizationPose estimationRobotic assemblyVision-guided manipulation 1 Motivation and Related Work An important prerequisite for object manipulation is estimating the pose of an object and coping with the uncertainty of the pose estimates. Various sensing modalities, such as proprioception [1, 2], visual exteroception [3, 4], and contact/force sensing [5] have been employed. Visual sensing allows passive perception as it does not require contact, and is thus useful in the pre-grasping phase. Tactile, contact, force, and proprioceptive sensing modalities are useful when robots interact with objects in the post-grasping phase. The pose of a grasped object can be quite uncertain as the act of grasping tends to move the object and increase the uncertainty. Many prior works have combined vision and contact to decrease uncertainty [6–13]. Soft grippers are more compliant and easier to control than their hard counterparts [14–17]. The flexible materials of soft hands enable compliance with discrepancy between their belief space and the real environment; this compliance allows soft hands to be more tolerant of errors in the pose estimates of objects. Softness, however, often reduces the confidence of the object state in the gripper since the pose of the object is more uncertain due to the flexibility of the soft fingers. In-hand object localization is thus needed for advanced object manipulations requiring accurate pose. The goal of this paper is to develop a reliable object manipulation system using soft hands and visual pose feedback and to evaluate its effectiveness. Figure 1a illustrates our system setup in which the Baxter robot is augmented with two soft hands and an RGB-D sensor. We use vision for localizing objects presented to the robot on a tabletop and then determining the pose of a grasped object in the hand. Figure 1b shows one of our soft hands, which is composed of four pneumatically actuated fingers [2]. An RGB-D sensor is employed to localize objects in the workspace of the robot in the pre-grasping phase and to detect soft fingers and a grasped object in the post-grasping phase. Our approach does not rely on proprioceptive force sensing, yet it is capable of assembly operations requiring precision. To the best of our knowledge, this is the first attempt to use a vision-based object localization for soft hands capable of assembly tasks. [] Fig. 1. System overview. Our system is composed of the Baxter robot augmented with two soft hands and an RGB-D sensor. Assembly parts are randomly placed on the table, so the positions and orientations of the parts are unknown. The RGB-D sensor localizes the parts on the table and inside the hand during the pre-grasping and post-grasping phases, respectively. The RGB channels are used for identification of the soft fingers, while the depth channel is employed for depth-based object localization. This paper is organized as follows. We explain the details of our technical approach in Sect. 2, wherein the problem statement and object localization algorithms are presented. Section 3 describes the experimental setup and results of two experimental tasks. Section 4 discusses concluding remarks and future work. [] [] Fig. 2. Post-grasping object localization. The hand regions (white in b) are estimated from a Gaussian naive Bayes classification on the hue and saturation channels from the RGB channels (a) in the RGB-D sensor. The detected finger regions are then ignored in the depth-based object localization (c, d). The red wireframes show the localized block part. (Best viewed in color) [] 2 Technical Approach Problem Statement: Given objects randomly placed a tabletop, we wish to enable a robot to grasp an object and connect it to another object on the table using vision as feedback. The robot is assumed to have soft hands. The objects have a known geometry [$$\mathcal {M}$$]. The stable grasp poses for the objects [$$\mathcal {X}^{o}\_{e}\subset SE({3})$$] and the extrinsic calibration of the RGB-D sensor [$$\mathbf {X}^{w}\_{c}\in SE({3})$$] are assumed to be known. 2.1 Pre-grasping Object Localization The pre-grasping object localization estimates the poses of the objects on a planar table before the robot executes grasping, allowing for a pre-computed stable grasp to be realized. Algorithm 1 presents the pre-grasping object localization procedure. It takes the depth image [$$\mathcal {D}$$] from an RGB-D sensor and a set of object models [$$\mathcal {M}$$] as input, and then it returns a set of object poses [$$\widehat{\mathcal {X}}\subset SE({3})$$] and their associated likelihoods [$$\widehat{\mathcal {L}}\subset \mathbb {R}^{+}$$] and object indices [$$\widehat{\mathcal {O}}\subset \mathbb {N}$$]. We assume a table-top manipulation scenario where objects are placed on a planar table. This assumption allows the robot to segment foreground objects from the planar background. The function [] first fits the plane model to the point cloud [$$\mathcal {D}$$]. Foreground point clouds of the objects are then clustered, and a set of center positions of the objects [$$\mathcal {S}\subset \mathbb {R}^{3}$$] is estimated. The Iterative Closest Point (ICP) algorithm [18] is sequentially executed on each center position. As the unknown orientation of each object is constrained by the table, a set of in-plane rotations [$$\mathcal {R}$$] with the step [$$\varDelta $$] is considered. Hence the initial pose for the ICP algorithm is set as [] where [$$r \in SO({3})$$] and [$$s \in \mathbb {R}^3$$]. Among the multiple ICP executions, the optimal pose estimate [$$\widehat{\mathbf {X}}$$] with the most likelihood [$$\hat{ l }$$] is chosen for each point cloud cluster s. It is worth noting that the depth image [$$\mathcal {D}$$] is in the camera coordinate frame, while the initial pose and the optimal pose estimate are with respect to the world coordinate frame. To transform between these two coordinate frames, the extrinsic calibration of the sensor [$$\mathbf {X}^{w}\_{c}$$] is estimated offline. The stable grasp poses for the objects [$$\mathcal {X}^{o}\_{e}\subset SE({3})$$] are also assumed to be known a priori, and thus a set of grasping poses for each object is accordingly calculated from the object pose estimates [$$\widehat{\mathcal {X}}$$]. Once this is done, the robot can be commanded to the desired pose and the grasp can be executed. 2.2 Post-grasping Object Localization A challenge with visual in-hand object localization (IOL) is the occlusions caused by the grasping fingers. The performance of registration algorithms such as ICP is often deteriorated by occlusions. It is thus important to remove the regions of the fingers before running the registration algorithms. Traditionally, reasoning about finger locations has been done through model-based approaches where an articulated shape model is rendered with the current state of joints [13]. However, the deformation of a soft finger is nonlinear so model-based approaches are difficult to derive and often too computationally intensive to use in real-time. Furthermore, the deformation of a soft finger varies depending on the grasped object shape and the contact points between the finger and the surface of the object. To address these issues, we adopted a data-driven approach in which a binary naive Bayes classifier [19] is trained to detect the fingers using the color data from the RGB-D sensor. Algorithm 2 presents the post-grasping object localization procedure. The RGB [$$\mathcal {I}$$] and depth [$$\mathcal {D}$$] images along with the grasped object model [$$ m \in \mathcal {M}$$] and grasped pose [$$\mathbf {X}^{o}\_{e}\in \mathcal {X}^{o}\_{e}\subset SE({3})$$] are given, and the algorithm returns the refined object pose [$$\widehat{\mathbf {X}}$$] with its likelihood [$$\widehat{ l }$$]. The function [] detects the soft hand regions [$$\mathcal {H}$$] via the naive Bayes classifier. To train the classifier, [$$\mathcal {D}$$] from the sensor is employed to segment the soft fingers and the background, and the color distributions of the soft finger and background regions are used as positive and negative training data, respectively. We adopted the HSV color space and used H (hue) and S (saturation) channels for better invariance to brightness changes. The color distributions for both the positive and the negative examples are modeled by the mixture of Gaussians. The white area in Fig. 2b shows the soft hand regions [$$\mathcal {H}$$] detected from the trained naive Bayes classifier. The region [$$\mathcal {H}$$] is then used as an erasing mask for [$$\mathcal {D}$$] so that the in-hand object localization is done on the depth image without the hand regions [$$\acute{\mathcal {D}}$$]. The initial object pose [$$\mathbf {X}^{w}\_{o}$$] in the world coordinate frame [$$ w $$] is estimated from the end effector pose [$$\mathbf {X}^{w}\_{e}$$] and the grasping pose [$$\mathbf {X}^{o}\_{e}$$]. Figure 2c and d show the red wireframes of the object model [$$ m $$] with the refined pose [$$\widehat{\mathbf {X}}$$] on the depth and surface normal images, respectively. 3 Experiments We augmented the Baxter with two hands of four soft fingers each as shown in Fig. 1a. We tasked the robot to pick up one block with one hand then connect it to the other block on the tabletop. Figure 3 shows the block assembly procedure and the step-wise stages of the assembly. Once the Baxter grasps the block object, it re-localizes the block in the hand with the in-hand object localization (IOL). It then approaches to the top of the second block on the table and connects the grasped block to the block on the table. To make sure that the blocks are well inserted together, it lifts the assembled blocks. If the two blocks are lifted together, the assembly task is successful, otherwise unsuccessful. The rightmost column of Fig. 4 shows some successful and unsuccessful examples. [] Fig. 3. Block assembly. The assembly procedure (a) is to connect the block [$$\mathcal {M}\_1$$] to the other block [$$\mathcal {M}\_0$$] with the relative pose [$$\overline{\mathbf {X}}^0\_1 \in SE(3)$$]. The sequence of figures (b) shows that the Baxter grasps the block with its left soft hand and inserts it on the other block. The distance between fingers for both hand types is about 13 cm (c). To investigate the effectiveness of the soft hands and the post-grasping object localization, we compare the two aspects: hard gripper vs. soft gripper, and with and without the IOL. There are thus four different configurations considered in this experiments: 1. 1. The hard gripper without the IOL (H): This configuration is using the original hard gripper of the Baxter, and not using the post-grasping object localization. This configuration serves as a baseline for comparative experiments in the following subsections.   2. 2. The hard gripper with the IOL (HI): It uses the original hard gripper, but it localizes the object after grasping.   3. 3. The soft gripper without the IOL (S): It uses the soft hand instead of the hard one, but does not localize objects after grasping.   4. 4. The soft gripper with the IOL (SI): This configuration uses the soft hand and the IOL to localize the object in the hand. It is the configuration of our system.   We investigate the effectiveness of soft hands by comparing the hard hands (H, HI) and the soft hands (S, SI) respectively. The effectiveness of the IOL can also be seen by comparing the performance with (HI, SI) and without (H, S) the IOL. We compare these four configurations in the two evaluation scenarios as follows: 1. 1. Evaluation with respect to artificial Gaussian noise in object pose   2. 2. Evaluation of the complete system.   Detailed experimental settings are explained in the subsequent sections. [] Fig. 4. A grasping and insertion example of the four configurations. The same Gaussian noise was added to the four configurations, yet the outcome of the insertion task is different. The added noise in this experiment was not enough to cause grasping failure, but it does cause insertion failure unless the IOL is used. Thus, using the IOL makes insertion more robust to pose errors. 3.1 Robustness to Object Pose Noise The purpose of this experiment is to compare the robustness and accuracy of the object manipulation with respect to the noise in object pose. The considered manipulation tasks include grasping and insertion. The success of such manipulation depends on object pose estimates which are calculated by the pre- and post-grasping object localization. The successful manipulation also depend on the arm trajectories solved from the motion planning algorithm. In order to evaluate the differences between the four configurations (H, HI, S, and SI), we minimize extraneous sources of error by maintaining a consistent configuration. As shown in Fig. 4, we fix the poses of the two blocks on the table and the same pose estimates are used for the four configurations. In order to ensure consistent object locations, two sheets of white paper were affixed to the table and the blocks were carefully aligned before each trial. The robot is tasked to pick up the left block and connects it to the right block. In order to get the beginning poses of the blocks, we execute the pre-grasping object localization multiple times (100 times in our experiment) and calculate the mean of the multiple pose estimates.¹ It turns out that the accuracy of our pre-grasping object localization algorithm shows sub-millimeter and sub-degree uncertainties in translation and rotation, respectively.² As the uncertainty of the pre-grasping localization is not significant, we add artificial noise to the object poses for the following evaluation. To evaluate the effectiveness of the four configurations with respect to the uncertainty in object pose, we add artificial Gaussian noise to the mean pose estimates. For fair comparison, we generate a series of Gaussian noise in the object pose and use the same noise series with each of the four configurations. As the block objects are on the table, we add Gaussian noise on the plane by adding in [$$\mathbf {x}$$], [$$\mathbf {y}$$], and [$$\varvec{\theta }$$]: [$$\begin{aligned} \mathbf {X}\_i&= \overline{\mathbf {X}}\widetilde{\mathbf {X}}\_i \end{aligned}$$] (1) where [$$\mathbf {X}\_i \in SE({3})$$] is the noise-perturbed pose and [$$\overline{\mathbf {X}} \in SE({3})$$] is the noise-free pose obtained by the mean of the multiple pose estimates from the pre-grasping object localization. The noise [$$\widetilde{\mathbf {X}}\_i \in SE({3})$$] is sampled from Gaussian distributions as follows: [$$\begin{aligned} \widetilde{\mathbf {X}}\_i&= \begin{pmatrix} \mathbf {R}\_z(\theta ) &{} \mathbf {t}\\ \mathbf {0} &{} 1 \end{pmatrix} \end{aligned}$$] (2) where [] is the 3D rotation along the [$$\mathbf {z}$$]-axis and [$$t\_x, t\_y \sim \mathcal {N}(0, \sigma \_t^2)$$] where [$$\mathbf {t}= (t\_x, t\_y, 0)^{^{\intercal }}$$] is the translation from the center of the noise-free object pose.³ In our experiment, we used [$$\sigma \_t = 20{\,mm}$$] and [$$\sigma \_r = 20^{\circ }$$]. Figure 4 shows a grasping and insertion trial of the four configurations with the same Gaussian noise. The Gaussian noise for this trial is −13.5 mm in [$$\mathbf {x}$$]-axis, 0.6 mm in [$$\mathbf {y}$$]-axis, and [$$-12.7^{\circ }$$] along [$$\mathbf {z}$$]-axis. The error is small enough that all configurations are successful in grasping the left yellow block. For the insertion task, however, we can notice that the configurations with the IOL (HI, SI) are successful, while those without the IOL (H, S) are not accurate enough for the task. The insertion task requires much tighter tolerance than the grasping task, and hence enhancing the uncertainty of the object pose in the hand is crucial for such insertion task. [] Fig. 5. Plots of the results with respect to Gaussian noise in the object pose. Each of the four configurations was executed 50 times with a series of pre-generated Gaussian noise in the block object pose. Each square represents one trial in which its location and orientation depict the Gaussian noise in the translation ([$$\mathbf {x}$$] and [$$\mathbf {y}$$]) and the orientation ([$$\theta $$]). A white square means unsuccessful to grasp the object; a lightly shaded blue square represents successful grasping but failure of assembly, while the dark blue square shows successful grasping and assembly. The symbol [$$\varvec{+}$$] represents the origin of the object coordinate frame. Figure 5 presents the plots of the grasping and assembly results with respect to Gaussian noise in the object pose. By comparing left and right columns, we notice a significant improvement in the success rate of the assembly operation in both hard and soft hands. As the IOL refines the pose of the objects, the uncertainty of the object in the hand is significantly reduced, and hence the number of successful trials is noticeably increased. Another direction to compare is the hard hand (first row) and soft hand (second row). If we look at the left column showing the results without the IOL, we see a noticeable improvement in both grasping and assembly tasks. It demonstrates the adaptability of the soft hands with respect to pose noise. In the lower plots, even if the block is off about 4 cm in both axes, the soft gripper can grasp the noise-perturbed object, while the hard gripper shows less promising results. The compliance of the soft gripper plays an important role even with the IOL, as can be seen in the right column. These experimental results clearly show the effectiveness of the compliant soft grippers, which are adaptable to the noisy pose estimates, as well as of the IOL. Table 1. Success rates for 50 trials of the Gaussian noise experiment. +--------------------------------------+--------------------+---------+--------------------+---------+ | Measure | Hard hand | | Soft hand | | +:=====================================+:===================+:========+:===================+:========+ | | [$$\lnot $$]IOL(H) | IOL(HI) | [$$\lnot $$]IOL(S) | IOL(SI) | +--------------------------------------+--------------------+---------+--------------------+---------+ | # of Failure | 27 | 23 | 11 | 11 | +--------------------------------------+--------------------+---------+--------------------+---------+ | # of Grasping | 18 | 7 | 26 | 9 | +--------------------------------------+--------------------+---------+--------------------+---------+ | # of Assembly | 5 | 20 | 13 | 30 | +--------------------------------------+--------------------+---------+--------------------+---------+ | Successful grasping[$$^\mathrm{a}$$] | 46% | 54% | 78% | 78% | +--------------------------------------+--------------------+---------+--------------------+---------+ | Successful assembly[$$^\mathrm{a}$$] | 10% | 40% | 26% | 60% | +--------------------------------------+--------------------+---------+--------------------+---------+ [$$^\mathrm{a}$$] The success rate of grasping considers both ‘# of grasping’ and ‘# of assembly’. Table 1 presents the results of the four configurations in terms of the numbers of each result and the success rates. If we compare the hard hands (H, HI) with the soft hands (S, SI), we notice a significant improvement in the success rate of grasping. The success rate of grasping for hard gripper is about 50% on average, while that for soft gripper is almost 80%. Even when the pose estimate of each object was perturbed, the soft gripper tends to adapt to the pose error due to the flexibility in the soft materials. Another notable difference is the success rate of the block assembly between the configurations with and without the IOL. For the hard gripper, the IOL enables the Baxter to assemble the blocks correctly and thus the success rate is four times higher than without the IOL. Similar effects can be found in the soft gripper, where SI shows 60% success rate while S is successful about one fourth. Running the IOL reduces uncertainty in the pose of the in-hand object, and thus it increases the success rate of the block assembly task which requires a tight tolerance. 3.2 Evaluation of the Complete System In the second evaluation, we compare the four configurations in less constrained settings. The setup is similar to the experiment in Sect. 3.1, but the two blocks are randomly placed on the table and the robot randomly picks one of the blocks and connects it to the other block. The Gaussian noise is not added to the pose estimate from the pre-grasping object localization. This setting is to evaluate the complete system with uncertainties from the pose estimate of the object localization algorithms, planning trajectories, robot calibration, etc. Table 2 shows the success rates of the block assembly on the table in the four configurations. As we explained in Sect. 3.1, our pre-grasping object localization returns sub-millimeter accuracy in translation and sub-degree accuracy in rotation. Without the additional Gaussian noise, we notice that grasping the block object is not a challenging problem for both hard and soft hands. It is, however, still challenging for the assembly task. When the hard hand is considered without the IOL, the success rate is only 41%. But the same hand with the IOL improves the success rate of the assembly task to 66%, which is more than 20% improvement. A similar trend can be observed in the soft hand configuration. Without the IOL it is successful in 72% of trials, but using the IOL enables it to succeed in over 90% of trials. If we compare hard and soft hands, we notice that there is about 30% improvement when using soft hands (41% to 72% and 66% to 92%). These results therefore confirm both the effectiveness of using adaptable, flexible soft hands and of using the IOL. Together, they can yield successful manipulation in challenging scenarios. Table 2. Success rates for 100 trials of the complete system experiment. +---------------------+---------------------+----------+---------------------+----------+ | Measure | Hard hand | | Soft hand | | +:====================+:====================+:=========+:====================+:=========+ | | [$$\lnot $$]IOL (H) | IOL (HI) | [$$\lnot $$]IOL (S) | IOL (SI) | +---------------------+---------------------+----------+---------------------+----------+ | Successful grasping | 100% | 100% | 100% | 100% | +---------------------+---------------------+----------+---------------------+----------+ | Successful assembly | 41% | 66% | 72% | 92% | +---------------------+---------------------+----------+---------------------+----------+ 4 Conclusion We proposed an object manipulation approach which provides flexibility through compliant soft hands and dependable accuracy using vision-based localization algorithms. The color and depth channels were effectively employed for soft finger segmentation and object localization, respectively. The object pose in the soft hands is prone to be uncertain due to the flexible deformation of the soft hands. Nevertheless, our in-hand localization approach is effective in mitigating this problem. The compliance of the soft hands is adaptable to the uncertainty in object pose, and thus it is effective for manipulation tasks which require a tight tolerance. For future work, we would like to extend this approach to dual-arm manipulation which is capable of more sophisticated manipulation such as assembling two object parts with two hands in air. This dual-hand manipulation doubles the uncertainties in both hands and objects. We anticipate that this manipulation will be a challenging scenario for which our in-hand object localization and compliant fingers can be very advantageous. Acknowledgement This work was supported by The Boeing Company. The support is gratefully acknowledged. References 1. Mason, M.T., Rodriguez, A., Srinivasa, S.S., Vazquez, A.S.: Autonomous manipulation with a general-purpose simple hand. Int. J. Robot. Res. 31(5), 688–703 (2012)CrossRef 2. Homberg, B.S., Katzschmann, R.K., Dogar, M.R., Rus, D.: Haptic identification of objects using a modular soft robotic gripper. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots Systems (IROS), pp. 1698–1705, September 2015 3. Collet, A., Martinez, M., Srinivasa, S.S.: The MOPED framework: object recognition and pose estimation for manipulation. Int. J. Robot. Res. 30(10), 1284–1306 (2011)CrossRef 4. Choi, C., Christensen, H.I.: Robust 3D visual tracking using particle filtering on the special Euclidean group: a combined approach of keypoint and edge features. Int. J. Robot. Res. 31(4), 498–519 (2012)CrossRef 5. Bimbo, J., Luo, S., Althoefer, K., Liu, H.: In-hand object pose estimation using covariance-based tactile to geometry matching. IEEE Robot. Autom. Lett. 1(1), 570–577 (2016)CrossRef 6. Allen, P.K.: Integrating vision and touch for object recognition tasks. Int. J. Robot. Res. 7(6), 15–33 (1988)CrossRef 7. Hebert, P., Hudson, N., Ma, J., Burdick, J.: Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 5935–5941, May 2011 8. Zhang, L.E., Trinkle, J.C.: The application of particle filtering to grasping acquisition with visual occlusion and tactile sensing. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 3805–3812. IEEE (2012) 9. Ilonen, J., Bohg, J., Kyrki, V.: Fusing visual and tactile sensing for 3-D object reconstruction while grasping. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 3547–3554 (2013) 10. Bimbo, J., Seneviratne, L., Althoefer, K., Liu, H.: Combining touch and vision for the estimation of an object’s pose during manipulation. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots Systems (IROS), pp. 4021–4026, November 2013 11. Chalon, M., Reinecke, J., Pfanne, M.: Online in-hand object localization. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots Systems (IROS), pp. 2977–2984, November 2013 12. Guler, P., Bekiroglu, Y., Gratal, X., Pauwels, K., Kragic, D.: What’s in the container? Classifying object contents from vision and touch. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots Systems (IROS), pp. 3961–3968. IEEE (2014) 13. Schmidt, T., Hertkorn, K., Newcombe, R., Marton, Z., Suppa, M., Fox, D.: Depth-based tracking with physical constraints for robot manipulation. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA) (2015) 14. Deimel, R., Brock, O.: A compliant hand based on a novel pneumatic actuator. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 2047–2053. IEEE (2013) 15. Stokes, A.A., Shepherd, R.F., Morin, S.A., Ilievski, F., Whitesides, G.M.: A hybrid combining hard and soft robots. Soft Robot. 1(1), 70–74 (2013)CrossRef 16. Deimel, R., Brock, O.: A novel type of compliant and underactuated robotic hand for dexterous grasping. Int. J. Robot. Res. 35(1–3), 161–185 (2016)CrossRef 17. Galloway, K.C., Becker, K.P., Phillips, B., Kirby, J., Licht, S., Tchernov, D., Wood, R.J., Gruber, D.F.: Soft robotic grippers for biological sampling on deep reefs. Soft Robot. 3(1), 23–33 (2016)CrossRef 18. Besl, P.J., McKay, N.D.: A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 239–256 (1992)CrossRef 19. Murphy, K.P.: Machine Learning: A Probabilistic Perspective. The MIT Press, Cambridge (2012)MATH 20. Moakher, M.: Means and averaging in the group of rotations. SIAM J. Matrix Anal. Appl. 24(1), 1–16 (2003)MathSciNetCrossRefMATH Footnotes 1 The pre-grasping object localization was run 100 times. For each pose estimate [], the standard deviation of the translation is calculated from the arithmetic mean of the translation [$$\overline{\mathbf {t}} = \sum \_{i}\mathbf {t}\_i/N$$] where N is the number of samples (i.e. [$$N = 100$$]). While the translation vectors [$$\mathbf {t}\_i$$] are in Euclidean space, the rotation matrices are in the special Orthogonal group SO(3). We thus need to take special consideration into this SO(3) space. It is well known that the geodesic metric on SO(3) is the angle between two rotation matrices [$$d(\mathbf {R}\_1, \mathbf {R}\_2)$$] and the valid mean of a set of rotation matrices can be estimated from the geometric mean [20]. The standard deviation in the angle is calculated from the geometric mean.   2 The standard deviations of the (x, y, z) translation in the 100 object pose estimates are (0.14, 0.46, 0.28) mm and (0.18, 0.38, 0.20) mm for the left and right blocks respectively. The standard deviations of the angle distance between each rotation matrix in SO(3) and the mean of the rotation matrices are [$$0.26^{\circ }$$] and [$$0.17^{\circ }$$] for the left and right blocks respectively.   3 The rotational axis of the block object model is the [$$\mathbf {z}$$]-axis.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_53 Grasping and Manipulation by Underactuated Hand with Multi-Joint Fingers Takumi Tamamoto¹  , Soichiro Nomura¹ and Koichi Koganesawa¹   (1) Department of Mechanical Engineering, Tokai University, 4-1-1 Kitakaname, Hiratsuka Kanagawa, 259-1292, Japan     Takumi Tamamoto (Corresponding author) Email: 4btad012@mail.tokai-u.jp   Koichi Koganesawa Email: kogane@keyaki.cc.u-tokai.ac.jp Abstract In our previous study we have developed the seven axes multi joint gripper (MJG) having a variable stiffness mechanism and have showed it achieves some dexterous grasping. In this paper, we discuss a hand having a number of mluti-joint fingers that was subsequently designed on the basis of the former MJG. The mechanism mainly consists of a serially connected differential gear systems controlled by only two actuators: one for driving all of the joints simultaneously and the other for adjusting stiffness of every joints all together. The hand succeeded envelope grasping of various shape objects with no sensory feedback. The experiments also revealed that the hand with three multi-joint finger successfully achieves transition from pinching to envelope grasping. It also discuss how joint stiffness should be set according to handling modes of the hand. Keywords Multi joint handDifferential gearEnvelope graspingVariable stiffness 1 Introduction In our daily life, human hands easily achieve some dexterous handling of objects, especially with adjusting their grasping force according to object’s characteristics such as weight, shape, etc. Meanwhile, as robots are expanding their fields of usage in service industries or home, their end-effector is required to do more dexterous actions. It is recently recognized that adjustability of joint stiffness takes a crucial role for the dexterous grasping in many studies focused on the variable stiffness mechanism (VSM) for the joint [1–3]. The VSM also enhances intrinsic safety against external disturbances due to its backdrivable nature. The authors have developed a prototype machine of multi joint gripper (MJG) with a VSM [4, 5]. One of beneficial points of MJG is to envelope an object by varying its posture according to the object’s shape (envelope grasping), which will assure a soft and stable grasping. This feature has been achieved by the wire and pulley system in some past studies. The wire-drive series of joint was the most typical mechanical architecture in this field [6–10]. However, the wire-drive has a crucial problem; motion of each joint is occasionally unforeseeable because it suffers influences of external load due to inertia of the gripper itself, gravity or friction. To circumvent the problems that the wire-driven MJG has and to achieve more dexterous grasping motions, we have MJG under a new concept of mechanism: a serially chained differential gear system. It also has a VSM that provides a envelope-grasping of unknown shape objects with evenly distributed contact forces. This paper shows our successive approach to develop a hand having a number of MJGs and discusses a way to set the joint stiffness according modes of the hand handling objects. 2 Mechanism In this section we outline the gripper with three mluti-joint finger (MJF) as shown in Fig. 1. The MJFs have been designed on the basis of the 7-joint/1-finger gripper that we presented in 2014 [4]. Each MJF has 4 serially chained rotary axes controlled by only one driving motor. Its also have a Variable Stiffness Mechanism (VSM) controlled by another motor (VSM-motor). [] Fig. 1. Overall appearance of the gripper with three multi-joint fingers The MJFs have a differential gear unit (DGU) in each joint, of which structure is shown in Fig. 2. The DGU is two input and one output system. The angles of the links are determined by the angles of the driving motor ([$$\theta \_{D}$$]) and the base gears ([$$\theta \_{Bi}$$]). The base gears are located at the base of the MJF (see Fig. 1(c)), which receive a passive torque from the individual extension spring that acts for restraining bidirectional rotation as shown in Fig. 2. Therefore the link angle of every joint ([$$\theta \_{Li}$$]) is commonly determined as follows, [$$\begin{aligned} \theta \_{L1} = \frac{Zl^2}{Zl^2 - Zs^2}\theta \_{B1} - \frac{Zs^2}{Zl^2 - Zs^2}\theta \_{D} \end{aligned}$$] (1) [$$\begin{aligned} \theta \_{Li} = \frac{Zs^2}{Zl^2 - Zs^2}\theta \_{Bi} - \frac{Zs^2}{Zl^2 - Zs^2}\theta \_{D} \qquad (i \ge 2) \end{aligned}$$] (2) where i is the joint number counted from the base side. Zl and Zs are the teeth numbers of the large gears ([$$=$$]30) and the small gears ([$$=$$]20) respectively. We only take two teeth numbers for these gears. Therefore, in the condition of the gripper’s weight or inertial force or external force being negligible, which assures the base-gear stays its position ([$$\theta \_{Bi}=0$$]), the motion of the gripper is determined only by the driving motor’s rotation angle. In this condition all joints rotate the same angle simultaneously (see Fig. 1(b)). In case of a link being hampered to rotate ([$$\theta \_{Li}=0$$]) by some external action during the driving motor continuing to rotate, which induces the base gear to rotate accompanied by stretching the springs with applying a passive torque to the link. Therefore the torque exerted by the driving motor must be sufficiently larger than loaded external torque or passive torque due to the spring expansion in order to have the driving motor taking any desired angle. Under satisfying this condition, the motion switches according to the passive torque given to the base-gears and external torque applied to the links. [] Fig. 2. Differential gear unit and base gear system Another prominent feature is that the mechanism constitutes a VSM in every joint. The VSM motor (see Fig. 1(c)) pulls all of the springs concurrently via wires and pulleys. It allows to adjust stiffness of every joints all together. 3 Dynamic Simulation Model In this section, we describe the dynamic simulation model for two MJFs. Figure 3 shows the dynamic simulation model and contact model. The equations of motion of the MJFs, the motors and a rigid body, are derived based on Lagrange’s method shown in Eq. (3). Generalized coordinates [$$\varvec{q}$$] is a vector including the angles of the motor for driving and VSM, the relative angles of each links and coordinate of the object as shown in Eq. (4). [$$\begin{aligned} \frac{d}{dt}\left( \frac{\partial L}{\partial \varvec{\dot{q}}} \right) - \frac{\partial L}{\partial \varvec{q}} + \frac{\partial V}{\partial \varvec{\dot{q}}} = \varvec{Q} ~~~\end{aligned}$$] (3) [$$\begin{aligned} \varvec{q} = \left[ ~\theta \_{D1}~~\theta \_{VSM1}~~\theta \_{D2}~~\theta \_{VSM2}~~\theta \_1~~\cdots ~~\theta \_8 ~~G\_x~~G\_y~~G\_ \theta ~ \right] ^T, \end{aligned}$$] (4) where L is Lagrangian, V is a dissipation energy and [$$\varvec{Q}$$] is a external torque/force vector. V includes damping torque which is proportional to the angular velocity of each joint. The damping coefficient is determined in such a way that the free vibration of actual and simulation becomes almost the same. [] Fig. 3. Dynamic simulation model To treat the fingers contacting an object at a multiple points, we use a penalty method that generates a reaction force proportional to length of a link sinking into the object at a contacting point. Now let us think about the i th link that is approaching an object. We first rotate the model around the reference point [$$\varvec{O}\_i$$] of the link i so that the longitudinal direction of the link is in a horizontal axis. After the rotation, we assess whether the object contacts to the link i or not based on the object coordinates [$$\varvec{G}'$$]. If the following two equations are satisfied the link and object are assumed to be in contact. [$$\begin{aligned} 0< G\_x' < li \end{aligned}$$] (5) [$$\begin{aligned} 0 < r - G\_y', \end{aligned}$$] (6) where li is the length of the link i, r is the object radius. This assessing of contact is performed along the line of all links. A reaction force ([$$|\varvec{R}|$$]) during contact is as follows. [$$\begin{aligned} |\varvec{R}| = k\_G \left( G\_y' - r \right) \end{aligned}$$] (7) The reaction force is considered to take the normal direction of the tangential line of object at a contacting point. [$$k\_G$$] is a penalty modulus. It is determined to [$$7\times 10^3$$][N/m] in such a way that sinking length of the link into an object does not exceed [$$1\%$$] of the representative size of the object. Each reaction force is converted into [$$\varvec{Q}$$] by Eq. (8). [$$\begin{aligned} Q\_i = \sum \_{n=1}^{n\_{max}} det[(\varvec{P}\_n - \varvec{J}\_i) ~~ \varvec{R}\_n] ~~~~~(1 \le i \le 8), \end{aligned}$$] (8) where i is the joint number, n is the identification number of contact points, [$$n\_{max}$$] is the total number of the contact points, [$$P\_n$$] is the contact point coordinates and [$$J\_i$$] is the joint coordinates. In considering the elastic object, the motion and deformation of the object are derived based on nonlinear finite element method as follows, [$$\begin{aligned} \varvec{M}\varvec{\ddot{U}}+\varvec{C}\varvec{\dot{U}}+\varvec{K}\varvec{U}=\varvec{F} \end{aligned}$$] (9) where [$$\varvec{U}$$] is the displacement of nodes. [$$\varvec{F}$$] is a reaction force that each node receives by the penalty method. [$$\varvec{M}$$], [$$\varvec{C}$$], [$$\varvec{K}$$] are mass matrix, damping matrix and stiffness matrix respectively and [$$\varvec{M}$$], [$$\varvec{K}$$] are determined by the coordinates of nodes. We use a Rayleigh Damping to stabilize the calculation of Eq. (9), which is calculated by Eq. (10), where [$$\alpha $$], [$$\beta $$] are the coefficient determined by the damping ratio (=[$$0\sim 1$$]) and the eigenvalue of each cell. [$$\begin{aligned} \varvec{C}=\alpha \varvec{M} + \beta \varvec{K} \end{aligned}$$] (10) 4 Experiment In this section we show the results of experiments and dynamic simulations. The gripper achieved stable envelope grasping adapting to various object shapes as shown in Fig. 4. All of the motions in Fig. 4 are achieved by an identical PID control of rotating three driving motors by the same target angle while three VMS motors holding a specified angle. Figure 5 shows the transition from pinching to envelope grasping. It is also achieved by the same control as the envelope grasping shown in Fig. 4. [] Fig. 4. Envelope grasping of various shapes objects by three MJFs [] Fig. 5. Transition from pinching to envelope grasping [] Fig. 6. Distribution of the contact forces for envelope grasping by two MJFs. Pattern of spring modulus is (a) [] Fig. 7. Distribution of the contact forces for envelope grasping by two MJFs. Pattern of spring modulus is (b) Table 1. Pattern of spring modulus [N/mm] +--------------+-----+-----+-----+-----+ | Joint number | 1st | 2nd | 3rd | 4th | +:=============+:====+:====+:====+:====+ | Pattern (a) | 3.5 | 0.9 | 0.9 | 1.5 | +--------------+-----+-----+-----+-----+ | Pattern (b) | 3.5 | 0.9 | 0.5 | 0.3 | +--------------+-----+-----+-----+-----+ Next we focus on distribution of contact forces. In an envelope grasping, contact forces should be distributed as equally as possible at contact points of which number should be as many as possible. Since it depends on torque balance between the joints, it is expected to improve by adjusting forces exerted by springs of the joints. The spring force in each joint is adjusted by setting the spring modulus individually. Figures 6 and 7 show the distribution of the contact forces in two cases of changing the spring modulus by experiments and dynamic simulations. Table 1 shows the two patterns of spring modulus set in the experiment and simulation. The driving motor rotates with a PID control from 0 to 80 degree, and the VSM motor keeps constant angles. In the case of pattern (a), number of contact points is 4 or 5. In the case of pattern (b), almost all of the links are in contact with the object. [] Fig. 8. Envelope grasping for the elastic body Figure 8 shows the simulation and experimental results of envelope grasping for the elastic body. In the simulation result, strength of the stress applying to element is shown with color variation from yellow (light stress) to red (heavy stress). We observed that the elastic body was evenly deformed by the MJFs’ gripping and we recognize it’s because the internal stress of the object was evenly distributed due to the spring modulus pattern (b). It suggests that a spring modulus pattern that takes the largest value at the joint located in the base side like pattern (b) shows good performance in the meaning that it takes much number of contact points with evenly distributed contact forces. Next we tried to pinch and pull up an object by controlling the driving motor alone. Figure 9 shows the results of experiments under two spring modulus patterns shown in Table 1. The driving motor rotates with a PID control from 0 to 100 degree, and the VSM motor keeps a constant angle. We recognized a normal pinching motion of the object in the case of pattern (a). This experiment shows that it is necessary to alter joint stiffness patterns according to the gripper’s handling modes: pinching or grasping. [] Fig. 9. Pull-in of the object by pinching 5 Conclusions and Future Work To achieve a dexterous envelope grasping with the hand we developed, we tried some graspings and manipulations. One of the results we achieved is envelope grasping of various shape objects by the hand with three MJFs. And transition from pinching to envelope grasping was also achieved. In the experiments focused on the stiffness balance of each joint, we showed different results of object handling when we set two joint stiffness patterns. One of the handling is the envelope grasping with multiple contact points with an object. In this case, it is required for joint located at the base side to be high stiffness. Meanwhile, in another object handling: pinching requires higher stiffness in the joints neighboring the finger tip. From these experiments, we conclude that stiffness balance of joints should be controlled according to object handling mode. Now we are planning to refine the MJFs by incorporating a mechanism for changing the stiffness according to object handling modes. References 1. Koganezawa, K.: A mechanical musculo-skeletal system for a human-shaped robot arm. Actuators 2014(3), 124–141 (2014)CrossRef 2. Schiavi, R., Grioli, G., Sen, S., Bicchi, A.: VSA-II: a novel prototype of variable stiffness actuator for safe and performing robots interacting with humans. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2171–2176, 19–23 May 2008 3. Wolf, S., Hirzinger, G.: A new variable stiffness design: matching requirements of the next robot generation. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1741–1746, 19–23 May 2008 4. Tamamoto, T., Sayama, K., Koganezawa, K.: Multi-joint gripper with differential gear system. In: Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 15–20, 14–18 September 2014 5. Sayama, K., Tamamoto, T., Koganezawa, K.: Multi-joint gripper -control of envelope gripping-. In: Proceedings of the IEEE International Conference on Advanced Intelligent Mechatronics, pp. 336–341, 7–11 July 2015 6. Hirose, S., Umetani, Y.: The development of soft gripper for the versatile robot hand. Mech. Mach. Theory 13–3, 351–359 (1978)CrossRef 7. Dechev, N., Clegjhorn, W.L., Naumann, S.: Multiple finger, passive adaptive grasp prosthesis hand. Mech. Mach. Theory 36(10), 1157–1173 (2001)CrossRefMATH 8. Massa, B., Roccella, S., Carrozza, M.C., Dario, P.: Design and development of an underactuated prosthetic hand. In: Proceedings of the 2002 IEEE International Conference on Robotics and Automation, pp. 3374–3379 (2002) 9. Yamano, N., Takamuku, S., Hosoda, K.: Development of underactuated humanoid robot hand for adaptable grasp. In: Proceeding of the 2008 JSME Conference on Robotics and Mechatronics, No. 08-4 (2008) 10. Wassink, M., Carloni, R., Stramigioli, S.: Port-hamiltonian analysis of a novel robotic finger concept for minimal actuation variable impedance grasping. In: IEEE International Conference on Robotics and Automation, pp. 771–776 (2010) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_54 Generalizing Regrasping with Supervised Policy Learning Yevgen Chebotar¹  , Karol Hausman¹  , Oliver Kroemer¹  , Gaurav S. Sukhatme¹   and Stefan Schaal¹   (1) University of Southern California, Los Angeles, California, USA     Yevgen Chebotar (Corresponding author) Email: ychebota@usc.edu   Karol Hausman Email: hausman@usc.edu   Oliver Kroemer Email: okroemer@usc.edu   Gaurav S. Sukhatme Email: gaurav@usc.edu   Stefan Schaal Email: sschaal@usc.edu Abstract We present a method for learning a general regrasping behavior by using supervised policy learning. First, we use reinforcement learning to learn linear regrasping policies, with a small number of parameters, for single objects. Next, a general high-dimensional regrasping policy is learned in a supervised manner by using the outputs of the individual policies. In our experiments with multiple objects, we show that learning low-dimensional policies makes the reinforcement learning feasible with a small amount of data. Our experiments indicate that the general high-dimensional policy learned using our method is able to outperform the respective linear policies on each of the single objects that they were trained on. Moreover, the general policy is able to generalize to a novel object that was not present during training. Keywords RegraspingPolicy searchGrasp stabilityReinforcement learning Y. Chebotar and K. Hausman contributed equally to this work. 1 Introduction Robust and stable grasping is one of the key requirements for successful robotic manipulation. Although, there has been a lot of progress in the area of grasping [1], the state-of-the-art approaches may still result in failures. Ideally, the robot would detect failures quickly enough to be able to correct them. In addition, the robot should be able to learn from its mistakes to avoid the failures in the future. To address these challenges, we propose using early grasp stability prediction during the initial phases of the grasp. We also present a regrasping behavior that corrects failed grasps based on this prediction and improves over time. In our previous work [2], we presented a first step towards an autonomous regrasping behavior using spatio-temporal tactile features and reinforcement learning. We were able to show that simple regrasping strategies can be learned using linear policies if enough data is provided. However, these strategies do not generalize well to other classes of objects than those they were trained on. The main reason for this shortcoming is that the policies are not representative enough to capture the richness of different shapes and physical properties of the objects. A potential solution to learn a more complex and generalizable regrasping strategy is to employ a more complex policy class and gather a lot of real-robot data with a variety of objects to learn the policy parameters. The main weakness of such a solution is that, in addition to requiring large amounts of data, these complex policies often result in the learner becoming stuck in poor local optima [3, 4]. In this paper, we propose learning a complex high-dimensional regrasping policy in a supervised fashion. Our method uses simple linear policies to guide the general policy to avoid poor local minima and to learn the general policy from smaller amounts of data. In related work [5], the authors tackle the regrasping problem by searching for the closest stable grasp in the database of all the previous grasps performed by the robot. A similar approach is presented by [6], where the authors propose an impedance-control-based grasp adaptation strategy that searches a database for a similar tactile experience in order to correct the grasp. In that case, the grasp adaptation is focused on in-hand adjustments rather than placing the object down and regrasping it. The idea of using supervised learning in policy search has been used in [7], where the authors use trajectory optimization to direct the policy learning process and apply the learned policies to various manipulation tasks. A similar approach was proposed in [8], where the authors use deep spatial autoencoders to learn the state representation and unify a set of linear Gaussian controllers to generalize for the unseen situations. In our work, we use the idea of unifying simple strategies to generate a complex generic policy. Here, however, we use simple linear policies learned through reinforcement learning rather than optimized trajectories as the examples that the general policy can learn from. 2 Technical Approach In this section, we describe all the steps of the hereby presented pipeline. First, we learn a grasp stability predictor for early detection of grasp failures based on spatio-temporal tactile features. In the second step, the grasp prediction is used to provide feedback for reinforcement learning of low-dimensional linear regrasping policies for single objects. Finally, in the last step, the individual policies are combined in a high-dimensional general policy through supervised learning. 2.1 Grasp Stability Prediction with Spatio-Temporal Tactile Features To describe a time series of tactile data, we employ spatio-temporal feature descriptors extracted using Spatio-Temporal Hierarchical Matching Pursuit (ST-HMP) that have been shown to have high performance in temporal tactile data classification tasks [9]. ST-HMP is based on Hierarchical Matching Pursuit (HMP), which is an unsupervised feature-extraction method used for images [10]. In ST-HMP, the tactile information is aggregated both in the spatial and the temporal domains. This is achieved by constructing a pyramid of spatio-temporal features at different coarseness levels, which provides invariance to spatial and temporal deviations of the tactile signal. In the spatial domain, the dictionary is learned and the sparse codes are extracted from small tactile image patches. To encode data as sparse codes, HMP learns a dictionary of codewords using the common codebook-learning method K-SVD [11]. Given a set of N H-dimensional observations (e.g. image patches) [$$Y = [y\_1,\dots ,y\_N] \in R^{H \times N}$$], HMP learns a M-word dictionary [$$D = [d\_1,\dots ,d\_M] \in R^{H \times M}$$] and the corresponding sparse codes [$$X = [x\_1,\dots ,x\_N] \in R^{M \times N}$$] that minimize the reconstruction error between the original and the encoded data: [$$ \min \_{D,X} \Vert Y - D X \Vert ^2\_F \,\,\, \text {s. t.} \,\,\, \forall m \, \Vert d\_m\Vert \_2 = 1 \,\,\ \text {and} \,\,\, \forall i \, \Vert x\_i\Vert \_0 \le K, $$] where [$$\Vert \cdot \Vert \_F$$] is a Frobenius norm, [$$x\_i$$] are the sparse vectors, [$$\Vert \cdot \Vert \_0$$] is a zero-norm that counts number of non-zero elements in a vector, and K is the sparsity level that limits the number of non-zero elements in the sparse codes. The resulting sparse codes are aggregated using spatial max-pooling. After computing the HMP features for all tactile images in the time series, pooling is performed on the temporal level by constructing a temporal pyramid. The tactile sequence is divided into sub-sequences of different lengths. For all sub-sequences, the algorithm performs max-pooling of the HMP features resulting in a single feature descriptor for each sub-sequence. Combined with spatial pooling, this results in a spatio-temporal pooling of the sparse codes. Finally, the features of all the spatio-temporal cells are concatenated to create a single feature vector [$$F\_P$$] for the complete tactile sequence: [$$F\_P = \left[ C\_{11},\ldots ,C\_{ST}\right] $$], where S is the number of spatial cells and T is the number of temporal cells. After extracting the ST-HMP feature descriptor from the tactile sequence, we use a linear Support Vector Machine (SVM) to learn a classifier for the grasp stability prediction [9]. Using multiple levels in the spatial and temporal pyramids of ST-HMP increases the dimensionality of tactile features substantially. When combined with learning regrasping behaviors for multiple objects, this approach leads to a large number of parameters to learn for the regrasping mapping function, which is usually a hard task for policy search algorithms [4]. Thus, in this work, we add several modifications to make this process feasible. In particular, we divide the learning process into two stages: (i) learning linear policies for individual objects and (ii) learning a high-dimensional policy to generalize between objects. 2.2 Learning Linear Regrasping Policies for Individual Objects Once a grasp is predicted to fail by the grasp stability predictor, the robot has to place the object down and regrasp it using the information acquired during the initial grasp. In order to achieve this goal, we learn a mapping from the tactile features of the initial grasp to the grasp adjustment, i.e. the change in position and orientation between the initial grasp and the regrasp. The parameters of this mapping function for individual objects are learned using reinforcement learning. We define the policy [$$\pi ({\varvec{\theta }})$$] as a Gaussian distribution over mapping parameters [$${\varvec{\theta }}$$] with a mean [$${\varvec{\mu }}$$] and a covariance matrix [$${\varvec{\varSigma }}$$]. To reduce the dimensionality of the input features, we perform principal component analysis (PCA) [12] on the ST-HMP descriptors and use only the largest principal components. The mapping function is a linear combination of these PCA features: [$$ \left( x, y, z, \alpha , \beta , \gamma \right) = \mathbf {W} {\varvec{\phi }} \quad \text{ with } \,\, \mathbf {W} \in \mathbb {R}^{6 \times n} \,\, \text{ and } \,\, {\varvec{\phi }} \in \mathbb {R}^{n}, $$] where [$$\mathbf {W} $$] contains the learned weights [$${\varvec{\theta }} = (w\_{x,1}, \ldots , w\_{x,n}, \ldots , w\_{\gamma ,n})$$] of the features [$${\varvec{\phi }}$$], and n is the number of principal components. The reward [$$R({\varvec{\theta }})$$] is computed by estimating the success of the adjusted grasp using the grasp stability predictor. For optimizing the linear policy for individual objects we use the relative entropy policy search (REPS) algorithm [13]. The main advantage of this method is that, in the process of reward maximization, the loss of information during a policy update is bounded, which leads to a better convergence behavior. The goal of REPS is to maximize the expected reward [$$J(\pi )$$] of the policy [$$\pi $$] subject to bounded information loss between the previous and updated policy. Information loss is defined as the Kullback-Leibler (KL) divergence between the two policies. Bounding the information loss limits the change of the policy and hence, avoids sampling too far from unexplored policy regions. Let [$$q({\varvec{\theta }})$$] be the old policy and [$$\pi ({\varvec{\theta }})$$] be the new policy after the policy update. We formulate a constrained optimization problem: [$$ \max \_{\pi } \,J(\pi ) = \int \pi ({\varvec{\theta }}) R({\varvec{\theta }})\,d{\varvec{\theta }} \quad \text {s. t.} \,\, \int \pi ({\varvec{\theta }}) \log \frac{\pi ({\varvec{\theta }})}{q({\varvec{\theta }})}\,d{\varvec{\theta }} \le \epsilon , $$] where, as mentioned before, [$$J(\pi )$$] is the total expected reward of using the policy [$$\pi ({\varvec{\theta }})$$]. The additional constraint bounds the KL-divergence between the policies with the maximum information lost set to [$$\epsilon $$]. The updated policy is proportional to the old policy: [$$ \pi ({\varvec{\theta }}) \propto q({\varvec{\theta }}) \exp \left( \frac{R({\varvec{\theta }})}{\eta }\right) . $$] Therefore, we are able to compute the new policy parameters with a weighted maximum-likelihood solution. The weights are equal to [$$\exp \left( R({\varvec{\theta }}) / \eta \right) $$], where the rewards are scaled by the parameter [$$\eta $$]. By decreasing [$$\eta $$] one gives larger weights to the high-reward samples. An increase of [$$\eta $$] results in more uniform weights. The parameter [$$\eta $$] is computed according to the optimization constraints by solving the dual problem. Given a set of policy parameters [$$\{{\varvec{\theta }}\_1,\ldots ,{\varvec{\theta }}\_N\}$$] and corresponding episode rewards, the policy update rules for [$${\varvec{\mu }}$$] and [$${\varvec{\varSigma }}$$] can be formulated as follows [4]: [$$ {\varvec{\mu }} = \frac{\sum \_{i=1}^N d\_i {\varvec{\theta }}\_i}{\sum \_{i=1}^N d\_i}, \,\,\, {\varvec{\varSigma }} = \frac{\sum \_{i=1}^N d\_i \left( {\varvec{\theta }}\_i - {\varvec{\mu }}\right) \left( {\varvec{\theta }}\_i - {\varvec{\mu }}\right) ^\top }{\sum \_{i=1}^N d\_i}\quad \text{ with } d\_i = \exp \left( R({\varvec{\theta }}) / \eta \right) . $$] 2.3 Learning a General Regrasping Policy After the individual linear policies have been learned, we train a larger high-dimensional policy in a supervised manner using the outputs of the individual policies. This is similar to the guided policy search approach proposed in [3]. In our case, the guidance of the general policy comes from the individual policies that can be efficiently learned for separate objects. As the general policy class we choose a neural network with a large number of parameters. Such a policy has enough representational richness to incorporate regrasping behavior for many different objects. However, learning its parameters directly requires a very large number of experiments, whereas supervised learning with already learned individual policies speeds up the process significantly. [] Fig. 1. Objects and experimental setup used for learning the grasp stability predictor and the regrasping behavior. If an object falls out of the hand it returns to its initial position due to the shape of the bowl. Top-left: the cylinder. Top-right: the box. Bottom-left: the ball. Bottom-right: the novel object. To generate training data for learning the general policy, we sample grasp corrections from the already learned individual policies using previously collected data. Input features and resulting grasp corrections are combined in a “transfer” dataset, which is used to transfer the behaviors to the general policy. In order to increase the amount of information provided to the general policy, we increase the number of its input features by extracting a larger number of PCA components from the ST-HMP features. Using different features in the general policy than in the original individual policies is one of the advantages of our setting. The individual policies provide outputs of the desired behavior, while the general policy can have a different set of input features. To train the neural network, we employ the mean-squared error loss function and the Levenberg-Marquardt optimization algorithm [14]. In the hidden layers, we use neurons with the hyperbolic tangent sigmoid transfer function: [$$ a(x) = \frac{2}{1+\exp (-2x)}-1. $$] For the activation of the output layer we use a linear transfer function, i.e. the output is a linear combination of the inputs from the previous layer. In order to avoid overfitting of the training data we employ the early stopping technique during training [15]. The data set is divided into mutually exclusive training, validation and test sets. While the network parameters are optimized on the training set, the training stops once the performance on the validation set starts decreasing. 3 Experimental Results 3.1 Evaluation of Grasp Stability Prediction In our experiments, we use a Barrett arm and hand that is equipped with three biomimetic tactile sensors (BioTacs) [16]. Each BioTac includes an array of 19 electrodes, whose impedances change depending on the local deformation of the robot’s flexible skin. For extracting ST-HMP features, the BioTac electrode values are laid out in a 2D tactile image according to their spatial arrangement on the sensor as depicted in Fig. 2 (top left). We use bowls (see Fig. 1) to bring the objects up right if they fall out of the gripper during the extensive shaking motions that are performed later in the experiment. This experimental setup enables us to fully automate the learning process and let the robot run for many hours to autonomously learn the grasp stability predictor. The experiment proceeds as follows. The robot reaches for the object to perform a randomly generated top grasp. The randomness stems from white noise added to the top grasp. Standard deviation of the noise is [$$\pm 10^\circ $$] in roll and pitch of the gripper, [$$\pm 60^\circ $$] in yaw, and [$$\pm 1\,\text {cm}$$] in all translational directions. These parameters are tuned such that there is always at least one finger touching the object. After approaching and grasping the object using the force grip controller [17], the robot lifts the object and performs extensive shaking motions in all directions to ensure that the grasp is stable. The shaking motions are performed by rapidly changing the end-effector’s orientation by [$$\pm 15^\circ $$] and position by [$$\pm 3\,\text {cm}$$] in all directions multiple times. If the object is still in the hand after the shaking motions, we consider it to be a successful grasp. The wrist-mounted force-torque sensor is used to determine if the object is still in the hand at the end of the experiment. The ST-HTMP features use a temporal window of 650 ms before and 650 ms after starting picking up the object. Our goal is to determine early in the lifting phase if the grasp is going to fail. In this manner, the robot can stop the motion early enough to avoid displacing the object, and hence, it can regrasp it later. We evaluate our approach on three objects: a cylindrical object, a box and a ball. We perform a 5-fold cross-validation on 500 grasp samples for each object. The robot achieves a grasp classification accuracy of [$$90.7\%$$] on the cylinder, [$$82.4\%$$] on the box and [$$86.4\%$$] on the ball. 3.2 Learning Individual Linear Regrasping Policies After learning the grasp stability predictor, we evaluate the regrasping algorithm for individual policies. The experimental setup for this scenario is similar to the one for the grasp stability predictor. The robot uses the stability prediction to self-supervise the learning process. In this manner, we are able to let the robot run for many hours for each object to autonomously learn the regrasping behavior. As described in Sect. 2.2, we apply PCA and extract five principal components from the ST-HMP features for learning individual policies. As a result, linear policies contain only 30 parameters (5 for each of the 6 grasp adjustment dimensions). This makes the policy search feasible using relatively small amounts of data. We evaluate the individual policies learned for the cylinder, box and ball objects. We perform multiple policy updates for each object until the policy converges. For each update, we collect 100 regrasping samples. First, we perform a randomly generated top grasp. If the grasp is predicted to fail, the algorithm samples the current regrasping policy and the robot performs up to three regrasps. If one of the regrasps is successful, the robot stops regrasping and performs the next random grasp. The rewards for the reinforcement learning are specified as follows. 0.0: The grasp is predicted unsuccessful by the grasp stability predictor. We do not perform any additional actions. 0.5: The grasp is predicted successful by the stability predictor. However, the object falls out of the hand after additional extensive shaking motions. 1.0: The grasp is predicted successful and the object is still in the hand after the shaking motions. [] Fig. 2. Top left: schematic of the electrode arrangements on the BioTac sensor and the corresponding tactile image used for the ST-HMP features. V1, V2 and V3 are computed by averaging the neighboring electrode values. Top right, bottom left, bottom right: reinforcement learning curves for regrasping individual objects using REPS. Policy updates are performed every 100 regrasps. Figure 2 shows the average reward values after each policy update for all the objects. The robot is able to improve its regrasping behavior significantly. To evaluate the results of the policy search, we perform 100 random grasps using the final policies on each of the objects that they were learned on. The robot has three attempts to regrasp each object using the learned policy. Table 1 shows the percentage of successful grasps on each object after each regrasp. Already after one regrasp, the robot is able to correct the majority of the failed grasps by increasing the success rate of the grasps from 41.8% to 83.5% on the cylinder, from 40.7% to 85.4% on the box and from 52.9% to 84.8% on the ball. Moreover, allowing additional regrasps increases this value to 90.3% for two and 97.1% for three regrasps on the cylinder, 93.7% and 96.8% on the box, and to 91.2% and 95.1% on the ball. These results indicate that the robot is able to learn a tactile-based regrasping strategy for individual objects. Table 1. Performance of the individual and combined regrasping policies. +------------+---------------------+-----------+------------+------------+-----------------+ | Object | Individual policies | | | | Combined policy | +:===========+:====================+:==========+:===========+:===========+:================+ | | No regrasps | 1 regrasp | 2 regrasps | 3 regrasps | | +------------+---------------------+-----------+------------+------------+-----------------+ | Cylinder | 41.8 | 83.5 | 90.3 | 97.1 | 92.3 | +------------+---------------------+-----------+------------+------------+-----------------+ | Box | 40.7 | 85.4 | 93.7 | 96.8 | 87.6 | +------------+---------------------+-----------+------------+------------+-----------------+ | Ball | 52.9 | 84.8 | 91.2 | 95.1 | 91.4 | +------------+---------------------+-----------+------------+------------+-----------------+ | New object | 40.1 | - | - | - | 80.7 | +------------+---------------------+-----------+------------+------------+-----------------+ 3.3 Evaluation of General Regrasping Policy After training individual policies we create a “transfer” dataset with grasp corrections obtained from the individual linear regrasping policies for all objects. For each set of tactile features, we query the respective previously-learned linear policy for the corresponding grasp correction. We take the input features for the individual policies from the failed grasps in the open-source¹ BiGS dataset [18]. The grasps in BiGS were collected in an analogous experimental setup and can directly be used for creating the “transfer” dataset. In total, the training set contains 3351 examples: 1380 for the cylinder, 1035 for the box and 936 for the ball. We use supervised learning with the obtained dataset to learn a combined policy that mimics the behavior of the individual policies. In this work, we employ a neural network to achieve this task. To find the optimal architecture of the neural network, we evaluated different networks with various depths and numbers of neurons to learn the nonlinear policy. The best performance is achieved by using 20 ST-HMP PCA features as inputs. We have not observed any improvement of the approximation accuracy when using more than one hidden layer. This indicates that the ST-HMP algorithm already extracts most distinctive features from the tactile data and we do not require additional deep network architecture for our task. The final neural network consists of one hidden layer of 20 hidden units with tangent sigmoid activation functions, 20 input features and 6 outputs for grasp position and orientation adjustments. The resulting number of parameters in the generalized policy is 546. Such a high-dimensional policy would be hard to learn by directly employing reinforcement learning. Our formulation as supervised learning, however, simplifies this problem and makes the learning process with relatively small amounts of data more feasible. Table 1 shows performance of the generalized policy on the single objects. Since the combined policy is deterministic, we only evaluate a single regrasp for each failed grasp. Interestingly, the combined policy is able to achieve better performance on each of the single objects than the respective linear policies learned specifically for these object after one regrasp. Furthermore, in cases of the cylinder and the ball, the performance of the generalized policy is better than the linear policies evaluated after two regrasps. This shows that the general policy generalizes well between the single policies. In addition, by utilizing the knowledge obtained from single policies, the generalized policy is able to perform better on the objects that the single policies were trained on. The performance of the generalized policy on the box object is slightly worse than on the two other objects. A notable difference in this case is the increased importance of the gripper yaw angle with respect to the grasp performance. The individual policy learned on the box learns to correct the grasp such that the robot aligns its fingers with the box sides while regrasping. However, this is not important for the cylinder and the ball objects due to their symmetric shapes. Therefore, the regrasping policy for the box could not benefit from the two other policies when adjusting grasp in the yaw direction. We test performance of the generalized policy on a novel, more complex object (see the bottom-right corner in Fig. 1), which was not present during learning. It is worth noting that the novel object combines features of the three objects that the policies were trained on. The generalized policy is able to improve the grasping performance significantly, which shows its ability to generalize to more complex objects. Nevertheless, there are some difficulties when the robot performs regrasp on a part of the object that is different from the initial grasp, such as switching from the round lid to the bottom part of the object, which is of a box form. In this case, the regrasp is incorrect for the new part of the object, i.e. the yaw adjustment is suboptimal for the box part of the object due to the round grasping surface (the lid) in the initial grasp. The reason is the lack of this data point in the previously encountered situations in the training dataset. During the experiments, we were able to observe many intuitive corrections made by the robot using the learned regrasping policy. The robot was able to identify if one of the fingers was only barely touching the object’s surface, causing the object to rotate in the hand. In this case, the regrasp resulted in either rotating or translating the gripper such that all of its fingers were firmly touching the object. Another noticeable trend learned through reinforcement learning was that the robot would regrasp the middle part of the object which was closer to the center of mass, hence, more stable for grasping. On the box object, the robot learned to change its grasp such that its fingers were aligned with the box’s sides. These results indicate that not only can the robot learn a set of linear regrasping policies for individual objects, but also that it can use them as the basis for guiding the generalized regrasping behavior. 4 Conclusions and Future Work In this work, we proposed a method that is able to learn complex high-dimensional policies by using examples from simple policies learned through reinforcement learning. In this manner, we were able to avoid requiring large amounts of data to learn complex policies. Instead, we employed supervised learning techniques to mimic various behaviors of simple policies. To show the effectiveness of our method, we applied it to the problem of regrasping using tactile features. In particular, we used early grasp stability prediction during the initial phases of the grasp and a regrasping behavior that corrects failed grasps based on this prediction and improves over time. Our experiments indicate that the combined policy learned using our method is able to achieve better performance on each of the single objects than the respective linear policies learned using reinforcement learning specifically for these objects after one regrasp. Moreover, the general policy achieves approximately 80% success rate after one regrasp on a novel object that was not present during training. These results show that our supervised policy learning method applied to regrasping can generalize to more complex objects. As a next step, we plan to use the supervised policy learning method to learn other, more complex manipulation tasks. We also hope to be able to extend the presented method with other sensor modalities such as vision. References 1. Bohg, J., Morales, A., Asfour, T., Kragic, D.: Data-driven grasp synthesis a survey. IEEE Trans. Robot. 30(2), 289–309 (2014)CrossRef 2. Chebotar, Y., Hausman, K., Su, Z., Sukhatme, G.S., Stefan, S.: Self-supervised regrasping using spatio-temporal tactile features and reinforcement learning. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE (2016) 3. Levine, S., Koltun, V.: Guided policy search. In: Proceedings of the 30th International Conference on Machine Learning, pp. 1–9 (2013) 4. Deisenroth, M.P., Neumann, G., Peters, J.: A survey on policy search for robotics. Found. Trends Robot. 2(1–2), 1–142 (2013) 5. Dang, H., Allen, P.K.: Stable grasping under pose uncertainty using tactile feedback. Auton. Robot. 36(4), 309–330 (2014)CrossRef 6. Li, M., Bekiroglu, Y., Kragic, D., Billard, A.: Learning of grasp adaptation through experience and tactile sensing. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2014), pp. 3339–3346. IEEE (2014) 7. Levine, S., Wagener, N., Abbeel, P.: Learning contact-rich manipulation skills with guided policy search. In: 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 156–163. IEEE (2015) 8. Finn, C., Tan, X.Y., Duan, Y., Darrell, T., Levine, S., Abbeel, P.: Deep spatial autoencoders for visuomotor learning. CoRR 117(117), 240 (2015) 9. Madry, M., Bo, L., Kragic, D., Fox, D.: St-hmp: unsupervised spatio-temporal feature learning for tactile data. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2262–2269, May 2014 10. Bo, L., Ren, X., Fox, D.: Hierarchical matching pursuit for image classification: architecture and fast algorithms. In: NIPS, pp. 2115–2123 (2011) 11. Aharon, M., Elad, M., Bruckstein, A.: k-svd: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 54(11), 4311–4322 (2006)CrossRef 12. Jolliffe, I.T.: Principal Component Analysis. Springer, New York (1986)CrossRefMATH 13. Peters, J., Mülling, K., Altun, Y.: Relative entropy policy search. In: AAAI. AAAI Press (2010) 14. Hagan, M.T., Menhaj, M.B.: Training feedforward networks with the marquardt algorithm. IEEE Trans. Neural Netw. 5(6), 989–993 (1994)CrossRef 15. Yao, Y., Rosasco, L., Caponnetto, A.: On early stopping in gradient descent learning. Constr. Approximation 26(2), 289–315 (2007)MathSciNetCrossRefMATH 16. Wettels, N., Santos, V.J., Johansson, R.S., Loeb, G.E.: Biomimetic tactile sensor array. Adv. Robot. 22(8), 829–849 (2008)CrossRef 17. Su, Z., Hausman, K., Chebotar, Y., Molchanov, A., Loeb, G.E., Sukhatme, G.S., Schaal, S.: Force estimation and slip detection/classification for grip control using a biomimetic tactile sensor. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 297–303 (2015) 18. Chebotar, Y., Hausman, K., Su, Z., Molchanov, A., Kroemer, O., Sukhatme, G., Schaal, S.: Bigs: biotac grasp stability dataset. In: ICRA 2016 Workshop on Grasping and Manipulation Datasets (2016) Footnotes 1 http://​bigs.​robotics.​usc.​edu/​.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_55 Experimental Validation of Contact Dynamics for In-Hand Manipulation Roman Kolbert²  , Nikhil Chavan-Dafle¹   and Alberto Rodriguez¹   (1) Department of Mechanical Engineering, Massachhusetts Institute of Technology, Cambridge, USA (2) Robotics and Biology Laboratory, Technische Universität Berlin, Berlin, Germany     Roman Kolbert (Corresponding author) Email: romankolbert@hotmail.com   Nikhil Chavan-Dafle Email: nikhilcd@mit.edu   Alberto Rodriguez Email: albertor@mit.edu Abstract This paper evaluates state-of-the-art contact models at predicting the motions and forces involved in simple in-hand robotic manipulations. In particular it focuses on three primitive actions—linear sliding, pivoting, and rolling—that involve contacts between a gripper, a rigid object, and their environment. The evaluation is done through thousands of controlled experiments designed to capture the motion of object and gripper, and all contact forces and torques at 250 Hz. We demonstrate that a contact modeling approach based on Coulomb’s friction law and maximum energy principle is effective at reasoning about interaction to first order, but limited for making accurate predictions. We attribute the major limitations to (1) the non-uniqueness of force resolution inherent to grasps with multiple hard contacts of complex geometries, (2) unmodeled dynamics due to contact compliance, and (3) unmodeled geometries due to manufacturing defects. 1 Introduction Advances in computer vision and touch sensing over the last few decades have facilitated the effective perception of robots and their environments. Robots can know where they are with respect to other objects and surfaces, both in contact and at a distance, which opens up opportunities to plan and control their interaction. This paper is concerned with the experimental evaluation of state-of-the-art contact resolution techniques in robotics that explain that interaction in terms of predicted motions and forces. The application of choice is prehensile pushing—a form of in-hand manipulation that relies on the environment, acting as an external finger [1], to manipulate a grasped object. Prehensile pushing is a complex manipulation problem where the geometries/friction/motions of the gripper, the grasped object, and the environment all play an important role in determining the resultant motion of the object. In recent work Chavan-Dafle and Rodriguez [2] described an algorithm to model their interaction based on classical complementarity conditions for frictional point contacts [3], and a decomposition of complex contact geometries into rigid networks of point contacts. Under the assumption of rigid geometries, the algorithm predicts the motion of the manipulated object along with all acting forces and torques, as it is pushed against the environment. The main focus of this paper is to evaluate through careful experimentation the predictions by the proposed hard-contact model and by an equivalent state-of-the-art soft-contact model, in particular the physics engine MuJoCo [4]. We are specially interested in determining regions where the models produce acceptable predictions, and better understanding augmentation needs for a more realistic outcome. The experiments, described in Sect. 4, are conducted with an accurate industrial robotic arm fitted with a parallel jaw gripper, a Vicon tracking system to capture the pose of the object, and 6 axis force-torque sensors behind all contacts to provide calibrated ground truth. We evaluate the ability of the algorithms to predict motions and force for three primitive in-hand manipulation actions: linear sliding, pivoting, and rolling. We observe that the models, after careful tuning, can explain to first order the overall behavior of these actions. However, limitations easily show up due to the uncontrolled variability in the execution of the actions, due to the high sensitivity of the problem to small defects in object geometries and manufacturing features, and the unmodeled or difficult-to-calibrate effects in the rigidity/compliance of contacts. 2 Related Work Frictional contact has been rigorously studied for many years. Research from diverse fields of study have lead to fundamental theories and empirical models to explain the mechanics of friction [5–11]. Starting with Hertzian contact theory [5] for linear elastic contacts, all the way to recent nonlinear contact models of soft contacts [10, 12], have been used to produce simulations of the local interaction between bodies, in terms of motions, forces, and deformations. For computation reasons, the robotics community has traditionally preferred simple Coulomb point-contact models to explain physical interaction. A point contact model, with infinitesimal contact surface, offers only frictional forces within the tangential contact plane. A contact with finite geometry, modelling a softer point contact, provides also frictional torque about the contact normal. The relationship between the available linear and torsional friction force at a contact with finite geometry has been source of many works in robotics. Howe and Cutcosky [13] and Xydas and Kao [14] provide experimental validation of different soft contact models and approximations. These models, although proven useful for relatively hard materials, are computationally expensive; and require difficult-to-calibrate parameters or hard-to-satisfy assumptions of the distribution of pressure across the contact surface. Hence, the simple point contact model, though mostly unrealistic, is a prevalent model in robotics. In the last few decades, the robotic manipulation community has developed a large body of work based on simple point contact models. From seminal work on dexterous manipulation [15, 16] and rigid body simulations [3, 17], to more recent work on trajectory optimization [18], control [19], state estimation [20], or system identification [21] of systems undergoing frictional contact interactions. Unfortunately, little attention has been given to validating the assumptions the models build on, and the realism of their predictions. In previous work [22] we studied the validity of common assumptions in planar pushing interactions. This paper contributes with an experimental study of two models for contact resolution in the context of in-hand manipulation, and with a large dataset of carefully designed and recorded experiments. 3 Prehensile Pushing Our long term vision is to produce fast and reliable physical interaction between a robot and its environment. In particular we are interested in enabling prehensile pushing [2] as a general approach to manipulation of grasped objects. Prehensile pushing addresses in-hand manipulation as a sequence of simple robust pushes that control the grasp on an object by exploiting contacts with the environment and accurate arm motions. Figure 1 shows examples of prehensile pushes with different contact geometries and interactions. [] Fig. 1. Examples of prehensile pushes: rotating about an axis, pushing against a plane, and pivoting about an edge. (Figure from [2]) A reliable dynamic model is a fundamental building block to plan such motions, either for search-based or optimization-based planning methods. It is crucial to that end to understand under what conditions their assumptions are reasonable, and to what degree we can trust the predictions those models generate. In [2] we model prehensile pushing as a dynamic system composed of a rigid object in contact with the fingertips of the gripper, and the environment. In our formulation, the environment/pusher, acts as an extra finger that moves along a given trajectory and forces the object into a different grasp. Since the motion of the environment/pusher is the reflection of the motion of a dexterous robot arm, prehensile pushing can potentially give robots unprecedented levels of dexterity. We model the dynamics as a set of contact forces applied on the object through possibly complex planar contacts (e.g., line, patch,...), which we model as arrays of rigidly connected point contacts. The resultant motion of the object, and the forces at all contacts, are predicted in a time-stepping fashion as a consequence of Coulomb friction, the principle of maximum energy dissipation, non-penetration, and the motion of the pusher/environment. In this paper, we focus on the experimental validation of three prehensile pushing primitives–linear pushing, pivoting and rolling–executed under varying experimental conditions, including griping force, pushing velocity and pushing direction. We expect the model to be effective at reasoning about the interactions to first order. In practice, we see that prehensile pushing with multiple contacts with complex geometries is sufficient to expose the limitations of state-of-the-art contact modeling techniques. These manifest in the form of non-unique valid solutions and inaccurate predictions due to unmodeled compliance and manufacturing defects in the geometry and friction of the contact surfaces. [] Fig. 2. Experimental setup with 6 axis force-torque sensors fitted to both fingertips of the gripper and in the environment. In this case, a 3D printed Vicon marker is attached to the left end of the object for accurate and fast position tracking. The setup allows us to capture ground truth for motions and contact forces at 250 Hz. 4 Experimental Validation 4.1 Experimental Setup Setup. Figure 2 shows the setup we use to capture the dynamics in the contact experiments proposed in Sect. 3. It is instrumented to capture the 6DOF pose of the object, and the 6DOF forces and torques at all contacts at 250 Hz. We use an accurate ABB IRB120 robotic arm and a force-controlled parallel-jaw gripper to grasp the objects and push them against the environment. We use a Vicon motion tracking system with Bonita cameras to capture the pose of the object. The fingertips of the two gripper fingers are instrumented with ATI Nano17 F/T sensors and the environment with an ATI Gamma F/T sensor. For improved accuracy, we run a basic calibration routine for all F/T sensors, to eliminate the intrinsic offsets if any. Objects. For the experiments in this work, we use the five objects in Table 1, adding variability in materials, geometry, and weight. Table 1. Objects used in experiments. Physical properties. +-------+--------------------------+---------------+-------------+-----------+----------+ |   | Shape | Material | Length (mm) | Side (mm) | Mass (g) | +:======+:=========================+:==============+:============+:==========+:=========+ | obj1 | cylinder | aluminum 6061 | 100 | 25 | 158 | +-------+--------------------------+---------------+-------------+-----------+----------+ | obj2 | cylinder | ABS | 100 | 25 | 72.5 | +-------+--------------------------+---------------+-------------+-----------+----------+ | obj3 | cylinder with flat faces | aluminum 6061 | 100 | 25 | 145 | +-------+--------------------------+---------------+-------------+-----------+----------+ | obj4 | square prism | aluminum 6061 | 100 | 25 | 200.5 | +-------+--------------------------+---------------+-------------+-----------+----------+ | obj5 | square prism | ABS | 100 | 25 | 93.8 | +-------+--------------------------+---------------+-------------+-----------+----------+ Contacts. We designed exchangeable fingertips of different contact geometry (point, line, and circular) to be attached to the F/T sensors in the fingers and environment. These are 3D printed in a hard plastic material and covered with a thin layer of hard rubber which provided a good compromise between hardness, high friction and abrasion resistance. We identify the coefficient of friction at contacts with a small amount of experimental data, such that simulations and experiments yield similar results. The coefficients for the different contacts used in the experiments are in the range [0.35, 0.5] at fingers and [0.2, 0.3] at the pusher/environment. The rest of the data is used for validation. 4.2 Experimental Results Linear Pushing. In this experiment, a prismatic shaped object is pushed against a flat face of the force-torque sensor along a straight line (top of Fig. 3). We chose the fingertip contacts to be circular flat contacts of diameter 20 mm. We run multiple straight pushes by changing the gripping force [$$\in \ \{20,22,25,27,30,32,35\}$$] N, the pushing velocity [$$\in \ \{10,15,20,25\}$$] mm/s and the slope of the straight push with respect to the horizontal [$$\in \ \{-20,-10,0,10,20\}$$]. We performed three runs for each combination collectively yielding 420 each for objects obj3, obj4, and obj5. The complete raw data and helper files to parse it are available in the online repository [23] [] Fig. 3. Example of a linear push with gripping force of 20 N, pushing velocity of 10 mm/s and 0 [$$^\circ $$] slope to the horizontal. Note the motion of the object in the −Z direction, as the gripper moves straight in −X direction. The plots show the experimental data captured for motions on the left and force on the right (top), and simulated data with the hard-contact model in [2] (mid) and with the soft-contact model in MuJoCo [4] (bottom). Nominally, the pushing action starts at 0 s and ends at 1.5 s. Figure 3 shows results for one of those experiments and compares with the predictions by the two models of choice. The hard-contact model correctly predicts the motion of the object along the pushing direction (X) with no motion perpendicular to the flat faces of the fingertips (Y). It also predicts the downward sliding motion (Z) of about 1 mm, due to gravity pulling on the object. The experimental data shows force peaks at the fingertips and pusher right at the beginning of the motion. These peaks can be attributed to an impact phase, and to the known differences between kinetic and static friction, which are not considered in any of the two models. Note also that the predicted pushing force of 6 N is very close to the force observed in experiments, 6.2 N. In the series of experiments we also observe that increasing the gripping force, leading to a higher pushing force required to move the object and to a diminished falling motion along axis Z. Above 25 N of gripping force, the object barely slides down, a behavior correctly predicted by the hard contact model. Changes in the pushing velocity showed no significant change either in the forces at contacts or in the object motion. [] Fig. 4. Example of pivoting with grasping force of 20 N, pushing velocity of [$$10 ^\circ $$]/s and no pusher offset. Experimental data (top) Simulation data (mid) and MuJoCo data (bottom) for object orientation and contact torques at the contacts about the axis of pivoting for one of the pivoting experiments Pivoting. In this experiment the griper holds a prismatic object and pivots it against a line contact. Contacts at fingertips are again flat contacts of diameter 20 mm and that with the external feature is a line contact of length 28 mm. We conducted multiple experiments by changing the grasping force [$$\in \{20,22,25,27,30,32,35\}$$] N, the angular velocity of pivoting about fingers [$$\in \{2.5,5,10,15,20\}^{\circ }$$]/s and offset of the pusher contact location from the center of the object [$$\in \{0,5,10,15,20,25\}$$] mm. We performed three runs for each combination of the parameters collectively yielding 630 each for objects obj4 and obj5. Figure 4 shows an example of a pivoting experiment, as well as the results of object orientation and torques as observed experimentally and predicted by the two models of choice. Experiments and the two models agree that the object will remain virtually stationary while pivoting against the external line. The hard contact model also gives a good prediction of the torque experienced at the fingertips and pusher about the axis of pivoting. In general, we find agreement to first order between the analytical predictions from the hard model and experimental observations. The pushing force and torque experienced by the pusher increase as the gripping force increase or/and when the velocity of the push increases. The hard model, and to some degree the soft contact model, capture these trends. For the hard contact model, the predicted torques differ from the experimental values by as much as 0.005 Nm at the fingers and 0.01 Nm at the pusher. Note that the sensor uncertainty is 0.003 Nm at the fingers and 0.037 Nm at the pusher. We also observe a consistent slight decrease in the torque at the pusher as the object rotates, phenomenon which is not predicted by the hard-contact model, so could be due to unmodeled contact compliance. Rolling. In this experiment the gripper forces a cylindrical object to roll on a flat platform coated with silicone rubber of high coefficient of friction. In our setup, the grasping force required to successfully roll the object while retaining the grasp on the object is predicted by the hard-contact model to be below 5 N. It was challenging to reliably perform these experiments with our gripper which is designed to be operated above 5 N force. In this experiment, we do not have a force-torque sensor attached to the external contact. We conduct multiple experiments by changing the grasping force [$$\in \ \{3,4,5,6,7,8,9,10\}$$] N and robot velocity [$$\in \ \{5,10,15,20,25\}$$] mm/s, and performed three runs of each configuration, for a total of 240 experiments for objects obj1 and obj2. [] Fig. 5. Example of rolling with grasping force of 3 N and pushing velocity of 10 mm/s. Three instants from left to right show the beginning of the push, intermediate position and end of the push. Figure 5 shows the object orientation as seen from the gripper and the forces at the fingers along X and Z directions, when the object is rolled with a gripping force of 3 N. The hard-contact model predicts that, while the object is pushed in Y direction, the forces experienced at the fingers are balanced - while the finger pushing the object experiences an extra force, equal to half of the pushing force exerted by the platform, the finger in front experiences less force by the same amount. We see a similar behaviour in experimental data; but with a worse fitting than that with the previous experiments. This is likely due to poor force control of the gripper, given it is near its operating limit yielding momentary slack and tightening of the grasp as the object rolls. An important caveat here is that the simulations are consistent with experimental data only if the coefficient of friction of the silicone surface is assumed to be very high (close to 2.5). The unavailability of ground truth forces at the external contact made further analysis difficult. A further investigation of this behaviour would be required to discern if it is due to sticking effect of silicone, unstable gripper control, or other unmodelled effects, such as compliance. 5 Observations and Discussion The experiments show that the studied models provide useful predictions to first degree. If not to make very accurate predictions, at least they provide accurate trends. We conclude by discussing important miss givings of the models, and directions for future work. [] Fig. 6. Three runs of linear pushing with same parameters and initial conditions. Sum of forces along x (top left) forces along x at pusher contact (top right) forces along x at finger 1 (bottom left) forces along x at finger 2 (bottom right) 5.1 Variability in the Experimental Data Figure 6 shows three runs of an identical linear pushing experiment. They are similar, but clearly not identical. This variability is important, and should be further quantified. For reference, the plot on the top left shows the sum of forces in the X direction for each of those three runs. Newton’s second law says that the sum should be close to zero when the object is not moving or if the push is slow. Note that even before the push, the forces sum up to approximately [$$-1.5$$] N for all three runs. The two sensors in the fingers have an uncertainty of 0.25 N and the sensor in the environment an uncertainty of 0.75 N along the X axis. These values, together with possible small miss-alignments at contacts could justify the experimental error. Within the first second the plot shows a static noise of the sensors of approximately 0.02 N. The noise increases then between time 1 s and 2 s, which is the period of the push. A transition between sticking and sliding takes place. This effect was experienced specially for small velocities (5 mm/s) and high gripping forces (30 N). 5.2 Multiple Valid Solutions The sensor data shows a variability that cannot be just attributed to sensor noise (0.02 N) or uncertainty (0.25 N). In Fig. 6 the forces differ at some points by almost 2 N. These cases are very well correlated with the experiments where we observed discrepancies between the local contact forces and torques predicted by our simulator and those observed experimentally. However, as discussed in the previous section, the net wrench generated on the object as a result of these local forces and torques was found to be sufficiently close for the three runs, pointing to the possibility of high sensitivity of local forces. In general, we found that, there is no simple deterministic mapping between motions and forces, with forces varying by as much as 20%. This effect is specially related to hard surface contacts and the sensitivity to the pressure distribution, and it limits the ability of models to make accurate predictions of motions and forces. 5.3 Hard Vs. Soft Contacts In this paper we compared our proposed hard-contact model with MuJoCo [4], a fast physics engine designed for modelbased optimization. The plots for Mujoco in Fig. 3 show that the object starts sliding down even before making contact with the external pusher. In fact, we found it was very difficult to set up the simulation environment so that did not happen. The soft constraints in MuJoCo make the solver very fast and useful for many applications, but also limit the prediction accuracy, especially when rigid contacts and transitions in sticking and slipping at contacts are involved. To make the pusher motion control stable, we had to introduce damping which leads to the slower increase of contact forces. Equally, as seen in Fig. 4 for a pivoting experiment, the object motion predictions proved less realistic in MuJoCo (rotation of about 0.6 vs 1 radian). The soft contact model does not force the object to maintain line contact, which perturbs the natural stability of the pivoting action against a hard contact. The torques at the pusher contact differ by 0.02 N to 0.03 N from the experimental data. In contrast with the hard-model, it predicts a decreasing torque at pusher contact, which the experimental data also shows. We hypothesize this is due to actual compliance in the real contact. The effects of the soft model can be reduced by increasing the stiffness that MuJoCo uses to impose the contact constraints. However, above a certain threshold, the system becomes unstable and noisy. Spending more time with the simulator, and better understanding how to tune it, could potentially yield better results. References 1. Chavan-Dafle, N., et al.: Extrinsic dexterity: in-hand manipulation with external forces. In: IEEE ICRA, pp. 1578–1585 (2014) 2. Chavan-Dafle, N., Rodriguez, A.: Prehensile pushing: in-hand manipulation with push-primitives. In: IEEE/RSJ IROS, pp. 6215–6222 (2015) 3. Stewart, D.E., Trinkle, J.C.: An implicit time-stepping scheme for rigid body dynamics with inelastic collisions and coulomb friction. Int. J. Numer. Methods Eng. 39(15), 2673–2691 (1996)MathSciNetCrossRefMATH 4. Todorov, E.: Convex and analytically-invertible dynamics with contacts and constraints: theory and impl. in MuJoCo. In: IEEE ICRA, pp. 6054–6061 (2014) 5. Hertz, H.: On the Contact of Rigid Elastic Solids and on Hardness, Chap. 6. Macmillan, New York (1882) 6. Bailey, J.A.: Friction in metal machining-Mechanical aspects. Wear 31(2), 243–275 (1975)CrossRef 7. Ruina, A.: Slip instability and state variable friction laws. J. Geophys. Res. Solid Earth 88(B12), 2156–2202 (1983)CrossRef 8. Oden, J., Martins, J.: Models and computational methods for dynamic friction phenomena. Comput. Methods App. Mech. Eng. 52, 527–634 (1985)MathSciNetCrossRefMATH 9. Han, H., Shimada, A., Kawamura, S.: Analysis of friction on human fingers and design of artificial fingers. In: IEEE ICRA, pp. 3061–3066 (1996) 10. Urbakh, M., Klafter, J., Gourdon, D., Israelachvili, J.: The nonlinear nature of friction. Nature 430(6999), 525–528 (2004)CrossRef 11. Autumn, K., Dittmore, A., Santos, D., Spenko, M., Cutkosky, M.: Frictional adhesion: a new angle on gecko attachment. J. Exp. Biol. 209, 3569–3579 (2006)CrossRef 12. Ho, V.A., Wang, Z., Hirai, S.: Beam bundle model of human-like fingertip for investigation of tactile mechanism. In: IEEE/RSJ IROS, pp. 4491–4498 (2013) 13. Howe, R.D., Cutkosky, M.R.: Practical force-motion models for sliding manipulation. Int. J. Rob. Res. 15(6), 557–572 (1996)CrossRef 14. Xydas, N., Kao, I.: Modeling of contact mechanics and friction limit surfaces for soft fingers in robotics, with experimental results. Int. J. Rob. Res. 18(9), 941–950 (1999)CrossRef 15. Salisbury, J.K., Craig, J.J.: Articulated hands: force control and kinematic issues. Int. J. Rob. Res. 1(1), 4–17 (1982)CrossRef 16. Fearing, R.: Simplified grasping and manipulation with dextrous robot hands. IEEE J. Rob. Autom. 2(4), 188–195 (1986)CrossRef 17. Cherif, M., Gupta, K.K.: Planning quasi-static fingertip manipulations for reconfiguring objects. IEEE Trans. Rob. Autom. 15, 837–848 (1999)CrossRef 18. Posa, M., Cantu, C., Tedrake, R.: A direct method for trajectory optimization of rigid bodies through contact. Int. J. Rob. Res. 33(1), 69–81 (2014)CrossRef 19. Tassa, Y., Erez, T., Todorov, E.: Synthesis and stabilization of complex behaviors through online trajectory optimization. In: IEEE/RSJ IROS, pp. 4906–4913 (2012) 20. Yu, K.-T., Leonard, J., Rodriguez, A.: Shape and Pose Recovery from Planar Pushing. In: IEEE/RSJ IROS, pp. 1208–1215 (2015) 21. Fazeli, N., Tedrake, R., Rodriguez, A.: Identifiability analysis of planar rigid-body frictional contact. In: ISRR (2015) 22. Yu, K-T., Bauza, M., Fazeli, N., Rodriguez, A.: More than a million ways to be pushed. A high-fidelity experimental data set of planar pushing. In: IEEE/RSJ IROS (2016) 23. Data set of prehensile push actions. https://​mcube.​mit.​edu/​prepush-dataset © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_56 Iterative Visual Recognition for Learning Based Randomized Bin-Picking Kensuke Harada^(1, 2  ), Weiwei Wan¹, Tokuo Tsuji^(1, 3), Kohei Kikuchi⁴, Kazuyuki Nagata¹ and Hiromu Onda¹ (1) National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan (2) Osaka University, Toyonaka, Japan (3) Kanazawa University, Kanazawa, Japan (4) Toyota Motors Co. Ltd., Toyota, Japan     Kensuke Harada Email: harada@sys.es.osaka-u.ac.jp Abstract This paper proposes a iterative visual recognition system for learning based randomized bin-picking. Since the configuration on randomly stacked objects while executing the current picking trial is just partially different from the configuration while executing the previous picking trial, we consider detecting the poses of objects just by using a part of visual image taken at the current picking trial where it is different from the visual image taken at the previous picking trial. By using this method, we do not need to try to detect the poses of all objects included in the pile at every picking trial. Assuming the 3D vision sensor attached at the wrist of a manipulator, we first explain a method to determine the pose of a 3D vision sensor maximizing the visibility of randomly stacked objects. Then, we explain a method for detecting the poses of randomly stacked objects. Effectiveness of our proposed approach is confirmed by experiments using a dual-arm manipulator where a 3D vision sensor and the two-fingered hand attached at the right and the left wrists, respectively. Keywords Bin-pickingGraspingMotion planningVisual recognitionIndustrial robot 1 Introduction Randomized bin-picking refers to the problem of automatically picking an object that is randomly stored in a box. If randomized bin-picking is introduced to a production process, we do not need any parts-feeding machines or human workers to once arrange the objects to be picked by a robot. Although a number of researches have been done on randomized bin-picking [1–4], randomized bin-picking is still difficult and is not widely introduced to production processes. Since one of the main reasons is its low success rate of the pick, we have proposed a learning based approach which can automatically increase the success rate [5]. Figure 1 illustrates the randomized bin-picking where we use a dual-arm manipulator with a vision sensor (3D depth sensor) and two-fingered grippers both attached at the wrist. We first detect the poses of randomly stacked objects by using the visual information obtained from the 3D vision sensor attached at the wrist. Once the objects’ poses are obtained, we consider predicting whether the robot can successfully pick one of the objects from the pile. If it is predicted that the robot successfully picks an object, the robot tries to pick an object. In our approach, the success rate is expected to increase if the number of detected object increases. Here, in the conventional research on randomized bin-picking, we have tried to detect the poses of all objects at every picking experiment in spite of the fact that the configuration of object while executing the current picking trial is almost same as that while executing the previous picking trial. The configuration on objects while executing the current picking trial is usually just partially different from that while executing the previous picking trial since a finger usually contacts just a few objects during the previous picking trial. To cope with this problem, we propose a new method for object pose detection for the randomized bin-picking. In our proposed method, we consider detecting the objects’ poses at a portion of the pile where its visual information is different from the visual information obtained during the previous picking experiment. [] Fig. 1. Overview of our bin-picking system In our proposed method, we first obtain the pose of 3D vision sensor attached at the wrist to capture the point cloud on the randomly stacked pile realizing its maximum visibility. Here, based on the occupancy grid map, the maximum visibility of the pile is realized by merging the point cloud captured during the current picking trial with the point cloud captured during the previous picking trial. Then, we show a method for detecting the poses of objects. We consider comparing each segment of the point cloud captured during current picking trial with that captured during the previously picking trial. If the difference is small, we do not estimate the poses of objects and can save the time needed for the estimation. 2 Learning Based Bin-Picking Overview We first briefly explain the leaning based bin-picking proposed previously [5]. As shown in Fig. 1, let us consider the case in which the same objects are randomly stored in a box. To pick an object from the pile, a 3D vision sensor (e.g., Xtion PRO) first captures a point cloud of randomly stacked objects. Then, we try to estimate the poses of randomly stacked objects. Then, we try to pick one of the objects which poses were detected. First among multiple candidates of grasping postures, we solve IK to check the reachability of the robot. Then, for each reachable grasping posture, a discriminator trained through a number of picking trials estimates whether or not the robot can successfully pick an object. Here, the estimation is performed based on the distribution of point cloud included in the swept volume of finger motion as shown in Fig. 2. If there are multiple grasping postures which are estimated to successfully pick an object, we consider selecting a grasping posture from multiple candidates according to the value of an index function. Then, the robot actually picks an object according to the selected grasping posture. [] Fig. 2. Finger swept volume 3 Sensor Pose Calculation We assume that the manipulator has at least 6 DOF such that the wrist can make an arbitrary pose within its movable range. The pose of the 3D vision sensor is determined to maximize the visibility of randomly stacked objects so as to precisely estimate the poses of randomly stacked objects. As shown in Fig. 3, let us assume a n-faced regular polyhedron sharing its geometrical center with the geometrical center of box’s bottom surface. Let us also assume a line orthogonally intersecting a face of the polyhedron and passing through the geometrical center. Let us consider a point along the line where the distance measured from the geometrical center is l. We make a 3D vision sensor locating at this point and facing the geometrical center. By discretizing a position of a 3D vision sensor along the line as [$$l=l\_1, l\_2, \cdots , l\_m$$], we can totally assume [$$m \cdot n$$] candidates of a 3D sensor’s pose. We consider imposing the following conditions for each candidate: - The 3D vision sensor is located above the box’s bottom surface. - IK (inverse kinematics) of the arm where the 3D vision sensor is attached at its wrist is solvable. - For a pose of the 3D vision sensor where IK is solvable, no collision occurs among the links and between a link and the environment. Among a set of 3D sensor’s pose satisfying the above conditions, we consider selecting one maximizing the visibility of randomly stacked objects. Here, robotic bin-picking is usually iterated until there is no object remained in a box. For the first picking trial, we consider selecting a 3D sensor’s pose minimizing the occluded area of the box’s bottom surface as shown in Fig. 4(a). After the second picking trial, we consider using previous result of measurement to determine the pose of a 3D sensor as shown in Fig. 4(b) and (c). We consider partitioning the storage area into multiple grid cells [6, 7]. By using the point cloud captured in the previous picking experiment, we mark occupied to the grid cells including the point cloud. We also mark occluded to the grid cells occluded by the grid cells marked as occupied. Pose of a 3D sensor is determined to maximize the number of grid cells marked as occluded to be visible. Here, through the previous picking experiment, configuration of stacked objects may change since the manipulator contacts the objects. However, the method explained in this subsection does not consider the change of configuration. Our method approximates the optimum pose of the 3D sensor by assuming the change of configuration is small. [] Fig. 3. Regular polygon assumed at the geometrical center of bottom surface [] Fig. 4. Determination of camera pose maximizing the visibility of stacked objects 4 Object Pose Detection This section explains a method for detecting the pose of randomly stacked objects. For the first picking trial, we consider detecting the poses of as many objects as possible. After the second picking trial, we consider detecting the poses of objects which poses are changed. 4.1 Object Pose Detection for the First Picking Trial To pick an object from the pile, the 3D vision sensor first captures a point cloud of randomly stored objects. Then, we segment the captured point cloud as shown in Fig. 5(a). In this research, we used a segmentation method based on the Euclidian cluster prepared in the PCL (Point Cloud Library) [9]. For each segment of point cloud which bounding-box size is similar to the bounding-box size of an object, we try to estimate the pose of an object using a two-step algorithm: first roughly detecting the pose by using the CVFH (Clustered Viewpoint Feature Histogram) [8] and the CRH (Camera Roll Histogram) estimation, and then detecting the precise pose by using the ICP (Iterative Closest Point) estimation method. In a preprocessing process before starting the detection, we prepared 42 partial view of the object model, and precompute the CVFH and CRH features of each view. During the detection, we extract the plenary surface from the point cloud, segment the remaining points cloud, and compute the CVFH and CRH features of each segmentation. Then, we match the precomputed features with the features of each segment and estimate the orientation of the segmentations. The matched segments are further refined using ICP estimation method to ensure good matching. The segmentation that has highest ICP matches and smallest outlier points will be used as the output. For the first picking trial, we usually detect the poses of a number of objects. In such cases, since we have to solve ICP estimation for a number of times, we consider using multiple threads and solving multiple ICP estimation in parallel. [] Fig. 5. Segmentation of point cloud after the second picking trial 4.2 Object Pose Detection After the Second Picking Trial After the second picking trial, we consider using the current point cloud together with the previously captured one. If a part of the previously captured point cloud is similar to the current one, we do not need to calculate the object’s pose belonging to the part of point cloud and can save the time needed to calculate the objects’ poses. In a picking task, after a 3D sensor capture a point cloud of randomly stacked objects, a robot manipulator tries to pick an object from the pile. The configuration of objects after a robot manipulator tries to pick an object is usually partially different from the configuration before the picking trial. If the previously captured point cloud is partially similar to the current point cloud, we consider merging the part of previously captured point cloud to the current one. By merging a part of the previous point cloud to the current one, the occluded area of the point cloud is expected to be smaller. The algorithm of merging the point cloud is outlined in Fig. 5 and Algorithm 1. Let [$$\bar{P} = (\bar{p}\_1, \bar{p}\_2, \cdots , \bar{p}\_m)$$] and [$$P = (p\_1, p\_2, \cdots , p\_n)$$] be the previously captured point cloud and the current one, respectively. Let also [$$\bar{P}\_1, \bar{P}\_2, \cdots $$], and [$$\bar{P}\_s$$] be the segments of previous point cloud. The overview of the merging algorithm is explained in the following. Figure 5(a) shows the segmented point cloud obtained during the previous picking trial. On the other hand, Fig. 5(b) shows the current point cloud where the configuration of object is partially different from the previous one. As shown in Fig. 5(c), for each point included in the current point cloud, we search for the point included in the previous point cloud making the minimum distance between them (lines 6 and 7). We further find a segment of previous point cloud where the point making the minimum distance belongs to (line 8). For each segment of previous point cloud, we introduce two integer numbers [$$\mathrm{near}(i)$$] and far(i) expressing the number of points included in the segment [$$P\_i$$] where the minimum distance is smaller and larger, respectively, than the threshold MinDistance (lines 9 and 10). We determine whether or not we merge the segment [$$\bar{P}\_i$$] into the point cloud P depending on the ratio between [$$\mathrm{far}(i)$$] and [$$\mathrm{near}(i)$$]. [] We further segment the merged point cloud. For each segment of point cloud, we calculate the distance between a point in the segment and the object which pose is estimated during the previous picking trial. If the distance is less than the threshold, we use the result of pose estimation during the previous pick. On the other hand, if the distance is larger than the threshold, we newly estimate the pose of an object by using two step algorithm using the CVFH and CRF estimation and the ICP algorithm. [] Fig. 6. Estimation of objects’ pose [] Fig. 7. Grid cells of captured point cloud 5 Experiment We performed experiments on bin-picking. As shown in Fig. 6(a), we randomly placed nine objects in a box. We put nine objects close to each other such that the finger contacts a neighboring object when picking the target one. In the experiment, we performed the picking trial for three times. After the three times picking trial, we additionally captured the visual information. Figure 7 shows the grid cells of captured point cloud during a series of picking tasks where the red cells include the newly captured point cloud while the green cells include the previously captured point cloud. We can see that object recognition is performed only for the object where red cells are included. Figure 8 shows the pose of 3D vision sensor during a series of picking task by using the dual-arm industrial manipulator HiroNX. [] Fig. 8. Pose of 3D sensor during a series of picking task 6 Conclusions In this paper, we discussed the visual recognition system for learning based randomized bin-picking. We first explained the view planning method to maximize the visibility of randomly stacked objects. Then, since randomized bin-picking usually estimates the pose of a number of objects, we relaxed the computational cost of the object pose detection by using the visual information on randomly stacked objects captured during the current picking task together with the visual information captured during the previous picking tasks. Through experimental results, we confirmed that the computational cost of the object recognition is reduced. Here, in our visual recognition of randomly stacked objects, we used the conventional Euclidian cluster based method to segment the stacked objects. Using more advanced method on segmentation is considered to be our future topic. Also, performing experiment for different shaped objects is also considered to be our future research topic. References 1. Domae, Y., et al.: Fast graspability evaluation on single depth maps for bin picking with general grippers. In: Proceedings of the IEEE International Conference on Robotics and Automation (2014) 2. Harada, K., et al.: Project on development of a robot system for random picking-grasp/manipulation planner for a dual-arm manipulator. In: Proceedings of the IEEE/SICE International Symposium on System Intergration (2014) 3. Dupuis, D.C., et al.: Two-fingered grasp planning for randomized bin-picking. In: Proceedings of the RSS 2008 Manipulation Workshop (2008) 4. Harada, K., et al.: Probabilistic approach for object bin picking approximated by cylinders. In: Proceedings of the IEEE International Conference on Robotics and Automation (2013) 5. Harada, K., et al.: Initial experiments on learning based randomized bin-picking allowing contact with neighboring objects. In: Proceedings of the IEEE International Conference on Automation Science and Engineering (2016) 6. Thrun, S., Burgard, W., Fox, D.: Probabilistic Robotics. MIT Press, Cambridge (2005)MATH 7. Nagata, K., et al.: Picking up and indicated object in a complex environment. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2010) 8. Aldoma, A., Tombari, F., Rusu, R.B., Vincze, M.: OUR-CVFH – oriented, unique and repeatable clustered viewpoint feature histogram for object recognition and 6DOF pose estimation. In: Pinz, A., Pock, T., Bischof, H., Leberl, F. (eds.) DAGM/OAGM 2012. LNCS, vol. 7476, pp. 113–122. Springer, Heidelberg (2012). doi:10.​1007/​978-3-642-32717-9\_​12 CrossRef 9. Point Cloud Library. http://​pointclouds.​org © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_57 Mechanism and Control of Whole-Body Electro-Hydrostatic Actuator Driven Humanoid Robot Hydra Hiroshi Kaminaga¹  , Tianyi Ko¹  , Ryo Masumura¹  , Mitsuo Komagata¹  , Shunsuke Sato¹  , Satoshi Yorita¹   and Yoshihiko Nakamura¹   (1) Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo 113-8656, Japan     Hiroshi Kaminaga (Corresponding author) Email: kaminaga@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Tianyi Ko Email: kang@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Ryo Masumura Email: masumura@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Mitsuo Komagata Email: komagata@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Shunsuke Sato Email: sato@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Satoshi Yorita Email: yorita@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp   Yoshihiko Nakamura Email: nakamura@ynl.t.u-tokyo.ac.jp URL: http://www.ynl.t.u-tokyo.ac.jp Abstract Field robots are gaining attentions that they can perform the tasks where human cannot reach. Improvement on physical performance of robots is a fundamental issue. Backdrivability is a key mechanical feature of actuators that enables robust and stable interaction of robots with environment. We developed a humanoid robot HYDRA with backdrivable hydraulic actuators. Parallel mechanism is used extensively in multi-DOF joints to efficiently use actuator force. In this paper, mechanical structure of Hydra with backdrivability and mechanical strength is treated. Also, realtime control of hydraulic actuators that utilize backdrivability, and realtime control of robot joint systems with parallel kinematic chains are explained. Keywords Hydraulic actuatorParallel mechanismField robots H. Kaminaga–This work was supported by New Energy and Industrial Technology Development Organization (NEDO) The International R&D and Demonstration Project on Robotic Field / Research and Development of Disaster-Response Robot Open Platform (FY2014-FY2015). 1 Introduction In past years, robots have been engaged in operations in unstructured environment. Quince [1] and PackBot [2] are well known crawler type robots that have been deployed in surveillance tasks in the disaster site TEPCO Fukushima Daiichi nuclear power plant. Although wheel based and crawler based robots excel in mobility and energy efficiency in many cases, legged robots have advantages in moving through discontinuous rough terrain resembling steppingstones. In addition, humanoid robots have benefit of having minimum foot print compared to its reachable height and have possibility of making full use of human environment and tools. However, current robots are still far from the expectation. The problems on humanoid robots spans in various fields, with numerous works. In this paper, we mainly consider mechanical strength and controllability of the robot. Modern robots are mostly built using electric motors and gear drives. The major drawbacks of this construction comes from the gear drives: loss of backdrivability and low strength. Loss of backdrivability leads to loss of force controllability. Low strength leads to low reliability to be used for heavy duty applications. Direct drive motor driven robots are known to be backdrivable and don’t have issue of reliability, it is known that the mobility performance of those robots are limited due to their weight. From the advancements on hydraulic technology, hydraulic actuators are regaining the interests. Use of hydraulics gives a solution to issue on low strength of the actuators. Humanoid robots with hydraulic actuators can be listed as CB [3], Hydroid [4], Petman [5], Altas [6], and hydraulic humanoid developed in Ritsumeikan University [7]. Except for Hydroid, all of the robots listed have common hydraulic principle of using a high pressure pump and servovalves. Servovalve controlled hydraulic actuators are fundamentally non-backdrivable. Hydroid uses variable displacement pumps in hydrostatic actuators. They potentially have backdrivability, but their main intention is to store regenerative energy to accumulator in mechanical way. Our design is to use hydraulics as reducers to reduce transmission friction and realize compact and light actuators that can be scaled from miniature applications as hands, to large output application as limbs. In this paper, we explain development of a humanoid robot prototype with backdrivability and impact resistance. We explain mechanical structure of the humanoid robot Hydra, which is fully driven with hydraulic actuators. Design of the robot mechanism, controller, and their evaluations are reported. 2 Mechanical Design of Hydra Mechanical strength is an important property in mechanical design of a robot. A strong and impact resistant structure requires both the strength of actuator system and strength of linkage system. In this paper, we address these issues with combination of following methods (1) use of actuator with backdrivability, high impact resistance, and embedded self protection mechanism. (2) use of parallel-serial hybrid link mechanism to optimize load distribution. A electro-hydrostatic actuator (EHA) is one type of electric-hydraulic hybrid actuators. Hydrostatic transmission is used as a reducer, which is driven with servomotors. Pascal’s principle is the core principle of the speed reducing effect in an EHA, which realizes small transmission friction. With proper design, an EHA becomes backdrivable [8]. An EHA has benefits of both the electric motors and hydraulics: good controllability, high efficiency, no interference between actuators, and high strength. Figure 1 shows the revolute and prismatic type EHA configurations. Revolute configuration behaves as gear drives, and prismatic configuration behaves as lead screws. [] Fig. 1. Structure of electro-hydrostatic actuators [] Fig. 2. Actuators used in Hydra Modular design was used in the development. Three types of actuators were developed as shown in Fig. 2 and combined to construct a robot. Figure 4 shows construction of 1 DOF (degree of freedom), 2 DOF, and 3 DOF joints using type I and type II actuators. Connecting rod is attached to a structure called beam that is connected to and placed in parallel with a piston rod (see Fig. 3). Linear motion guide supports constraint force. Only the force in direction of piston rod is transmitted to realize thin diameter without buckling. This structure also has a benefit that the total length of the joint mechanism can be reduced compared to the case that the connecting rod is attached directly to the piston rod. [] Fig. 3. Structure of slider-crank 1 DOF parallel mechanism used in Hydra [] Fig. 4. Joint structure of various degree of freedom used in Hydra [] Fig. 5. Hydraulic humanoid robot Hydra. Left: outlook. Right: joint structure For 1 DOF joints, hinge type joints as knee and elbow use slider crank mechanism. It is beneficial that it can separate constraint force and joint torque. For inner shoulder joints and pronation joints, revolute joint was used to minimize the size. For multi-DOF joints, parallel mechanism was used to distribute load over multiple actuators. Maximum torque of a 2DOF parallel joint becomes double of that of a 2DOF serial joint. Universal joint type parallel mechanism was used to separate constraint forces and actuation forces. Figure 5 shows outlook and joint structure of the developed robot. Hydra is 1850 mm tall, and weighs approximately 130 kg without batteries. It is equipped with 41 DOF, which includes two 5 DOF hands. All the joints except for the neck joint use backdrivable EHA to realize robust and force sensitive actuation. Figure 6 shows the examples of 1 DOF, 2 DOF, and 3DOF joint actuation mechanisms used in Hydra. [] Fig. 6. Outlook of joint mechanism in Hydra 3 Closed Force Control of Electro-Hydrostatic Actuators In ordinary servo actuators, cascade controller is often used. In such controllers, the most inner loop is the current controller, which takes the control input from velocity and position controllers. PID control is used in many cases. [] Fig. 7. Structure of position control loop with pressure control inner loop. PID denotes proportional-integral-derivative controller and PI-D denotes PID control with process value derivative. Hydraulic actuation involves relatively large uncertainty in system parameter and dynamics, such as friction, bulk modulus, and viscosity. In this paper, we propose a cascaded actuator control that has pressure control loop, as shown in Fig. 7. The brushless DC motor is current controlled using Field orientation control [9]. The pressure controller gives the control input to current controller. Pressure controller regulates pressure acting on cylinder to the reference value given from outer loops. This is equivalent to controlling actuator output force since the pressure and the output force is proportional. Optionally this pressure control loop can be replaced by cylinder force control. The pressure control loop compensates unknown factors on hydraulics, such as pump mechanical friction and variation in viscosity. Now, we have outer position control loop as PID, but it can be replaced with any control system with reference force or torque as control input. For the actuators with poor backdrivability, force control needs to be performed with admittance control scheme. There are some successful examples of this method such as [10] that use very fast servovalves, but it is known that the control system tend to become unstable when the robots get in contact with rigid object. The proposed method utilizes backdrivability of EHAs and realizes force control with impedance control framework; the actuator takes torque as input and velocity as output. 4 Position Control of Parallel Link Joints Most path planning and gait pattern generator for humanoid robots assume robots with open kinematic chain. Most dynamics simulators also assume serial mechanism. In order to utilize existing efficient motion generation and simulation framework, we constructed a controller that converts an open kinematic chain robot (simulator model) to a robot with hybrid kinematics (Hydra). This is an approximation since parallel mechanism will have variable actuator apparent inertia due to varying moment arm, which would have effects on dynamics simulation. In this paper, we assume that the position tracking controllers have sufficient tracking performance. Under this assumption, effect of apparent inertia becomes negligible. We also assume that the mass of the joint mechanism is small enough so that it can also be neglected. In order to treat the robot as an open kinematic chain robot, we need to know the actuator position [$$z\_i$$] that corresponds to open kinematic chain joint angles [$$q\_j$$]. Let us consider the example case shown in Fig. 8. There are several variation in the 2 DOF joint structure that are used in Hydra, but the discussion here can be applied without loss of generality. Figure 8(b) shows the serial mechanism that we want to map the robot to. [] Fig. 8. Parameters of universal joint 2 DOF parallel mechanism Let us put roll angle (around x axis) and pitch angle (around y axis) of the mechanism as [$$q\_\mathrm {r}$$] and [$$q\_\mathrm {p}$$] respectively. For this type of mechanism, we can solve the actuator position [$$\varvec{z} = \left[ z\_1, \; z\_2\right] ^T$$] from joint angle [$$\varvec{q}=\left[ q\_\mathrm {r}, \; q\_\mathrm {p} \right] ^T$$] in closed form. The reverse calculation generally requires solution of nonlinear equations. Now, we take origin of the coordinate at center of the universal joint. Position of the lower end of the connecting rod [$$\varvec{z}\_1$$] and [$$\varvec{z}\_2$$] are chosen as follows: [$$\begin{aligned} \varvec{z}\_1=\left[ -a, \,\, b, \,\, h -l + z\_1\right] ^T, \,\, \varvec{z}\_2=\left[ -a, \,\, -b, \,\, h -l + z\_2\right] ^T \end{aligned}$$] (1) Here, l is the length of the connecting rod, a, b, and h are the parameters of the link structure shown in Fig. 8. Upper end position of the connecting rod, [$$\varvec{p}\_1$$] and [$$\varvec{p}\_2$$] are given as follows using [$$R\_j$$] being a rotation matrix around axis [$$j\in \left\{ \mathrm {x}, \mathrm {y}\right\} $$] and [$$\varvec{p}\_{10}$$] and [$$\varvec{p}\_{20}$$] being position of [$$\varvec{p}\_1$$] and [$$\varvec{p}\_2$$] when [$$z\_1 = z\_2 =0$$] respectively. [$$\begin{aligned} \varvec{p}\_i= & {} R\_\mathrm {y}(q\_\mathrm {p})R\_\mathrm {x}(q\_\mathrm {r})\varvec{p}\_{i0}\end{aligned}$$] (2) [$$\begin{aligned} \varvec{p}\_{10}= & {} \left[ -a, \,\, b, \,\, h \right] ^T, \,\, \varvec{p}\_{20}=\left[ -a, \,\, -b, \,\, h \right] ^T \end{aligned}$$] (3) Solving these equations under constraint of [$$\begin{aligned} || \varvec{p}\_i - \varvec{z}\_i || = l \end{aligned}$$] (4) gives the solution of [$$z\_1$$] and [$$z\_2$$] as follows. Using the geometric relations, [$$z\_1$$] and [$$z\_2$$] can be solved as follows: [$$\begin{aligned} z\_1= & {} a C\_\mathrm {r}S\_\mathrm {p} + b S\_\mathrm {r} + h C\_\mathrm {r}C\_\mathrm {p} - \sqrt{A} \end{aligned}$$] (5) [$$\begin{aligned} z\_2= & {} a C\_\mathrm {r}S\_\mathrm {p} - b S\_\mathrm {r} + h C\_\mathrm {r}C\_\mathrm {p} - \sqrt{B} \\ A= & {} l^2 - (-a + a C\_\mathrm {p} - h S\_\mathrm {p})^2 - (b + aS\_\mathrm {r}S\_\mathrm {p} - b C\_\mathrm {r} + h S\_\mathrm {r}C\_\mathrm {p})^2 \nonumber \\ B= & {} l^2 - (-a + a C\_\mathrm {p} - h S\_\mathrm {p})^2 - (-b + a S\_\mathrm {r}S\_\mathrm {p} + b C\_\mathrm {r} + h S\_\mathrm {r}C\_\mathrm {p})^2 \nonumber \end{aligned}$$] (6) Here we used the notations of [$$C\_i=\cos (q\_i)$$] and [$$S\_i=\sin (q\_i)$$]. (5) and (6) can be used to calculate reference position of EHAs [$$\varvec{z}^\mathrm {ref}$$] when a reference joint angles [$$\varvec{q}^\mathrm {ref}$$] is commanded from a motion generator. To calculate the joint positions [$$\varvec{q}$$] from the actuator positions [$$\varvec{z}$$], we first calculate Jacobian matrix [$$G = \partial \varvec{z} / \partial \varvec{q}$$], which can be calculated from (5) and (6) in closed form. Using G, [$$\varvec{q}$$] can be obtained by Newton’s method to solve nonlinear relations in iterative manner. In the range of motion of 2 DOF joints in Hydra, there is no singular point in parallel mechanism. This ensures [$$\det G \ne 0$$], hence existence of [$$G^{-1}$$]. We can calculate differential relation between [$$\varvec{q}$$] and [$$\varvec{z}$$] as follows: [$$\begin{aligned} \dot{\varvec{q}} = G(\varvec{q})^{-1} \dot{\varvec{z}} \end{aligned}$$] (7) Ordinary differential kinematics algorithm can be used to obtain [$$\varvec{q}$$] that corresponds to given [$$\varvec{z}$$]. This calculation is used to express robot’s posture as open kinematic chain robot, which is used in robot status visualizer. The obtained Jacobian matrix G can also be used convert joint torques [$$\varvec{\tau }$$] and actuator forces [$$\varvec{f}$$]. With virtual work principle, joint torques [$$\varvec{\tau }$$] can be calculated from actuator forces [$$\varvec{f}$$], which can be measured at the actuator as follows. [$$\begin{aligned} \varvec{\tau } = G^T \varvec{f} \end{aligned}$$] (8) 5 Software Implementation Hydra has 47 actuators, force / torque sensor on each foot, and IMU. To realize high speed control, a distributed control system was developed. In the lowest layer, motor current control is implemented on FPGAs to realize high speed hard realtime control. EHA controller proposed in Sect. 3 was implemented on microcontroller units (MCUs). One MCU was allocated to every two or three EHAs. Communication between FPGAs and MCU was done with high-speed LVDS. All MCUs communicate with realtime PC with EtherCAT. Parallel-serial kinematic conversion explained in Sect. 4 is done on the realtime PC. We used Ubuntu Linux with RT-preempt kernel on realtime PC to realize soft realtime system. The synchronization of the data between all the MCUs and the realtime PC is done in every 1ms. Both [$$\varvec{q}\rightarrow \varvec{z}$$] and [$$\varvec{z}\rightarrow \varvec{q}$$] are calculated in 1ms. Figure 9 shows layers of the controllers and software functionality implemented for Hydra. [] Fig. 9. Layers of software used in whole body control of Hydra [] Fig. 10. Comparison of joint position control with pressure feedback 6 Experiments In order to evaluate control method of EHAs, a position tracking test was conducted. An EHA was fixed on a horizontal flat table and a inertial load of 30 kg was connected to the connecting rod. A square wave with 0.5 mm peak-peak was selected as a reference position (Fig. 10). Figure 7 shows the position tracking result for position control with and without pressure controller as inner loop control. Root-mean-square of the tracking error was 0.109 mm for no pressure loop case and 0.050 mm for control with pressure inner loop control. The result of the pressure inner loop control excelled the case without by 54%. 7 Conclusions In this paper, we introduced the mechanical design and control method of the humanoid robot Hydra. The use of EHAs improves backdrivability, force sensitivity, and robustness of the actuators fundamentally. Parallel actuation were used extensively to realize efficient use of actuators and separation of constraint force and actuator force that improves reliability. In order to treat the hybrid kinematic chains used in Hydra, we proposed the control method that converts the robot model to equivalent serial kinematic chain. This conversion enables us to use simulators and motion planners developed for serial mechanism robots. The proposed method was implemented on a realtime host controller and was confirmed to perform the conversion in every 1ms. To cope with uncertainties of hydraulics, an EHA control system that has pressure controller as an inner loop was proposed. The proposed method showed 54% improvement of position tracking even under large inertial load, which showed the robust and high performance actuator control system. References 1. Nagatani, K., Kiribayashi, S., Okada, Y., Tadokoro, S., Nishimura, T., Yoshida, T., Koyanagi, E., Hada, Y.: Redesign of rescue mobile robot Quince. In: International Symposium on Safety, Security and Rescue Robotics, pp. 13–18 (2011) 2. Yamauchi, B.M.: PackBot: a versatile platform for military robotics. In: SPIE Proceedings, vol. 5422, pp. 228–237 (2004) 3. Cheng, G., Hyon, S.H., Morimoto, J., Ude, A., Colvin, G., Scroggin, W., Jacobsen, S.C.: CB: a humanoid research platform for exploring neuroscience. In: IEEE-RAS International Conference on Humanoid Robots, pp. 182–187 (2006) 4. Alfayad, S., Ouezdou, F.B., Namoun, F., Cheng, G.: High performance integrated electro-hydraulic actuator for robotics part I: principle, prototype design and first experiments. Sens. Actuators A Phys. 169(10), 115–123 (2011)CrossRef 5. Nelson, G., Saunders, A., Neville, N., Swilling, B., Boundaryk, J., Billings, D., Lee, C., Playter, R., Raibert, M.: PETMAN: a humanoid robot for testing chemical protective clothing. J. Robot. Soc. Jpn. 40(4), 372–377 (2012)CrossRef 6. Boston Dynamics: Atlas - The Agile Anthropomorphic Robot. http://​www.​bostondynamics.​com/​robot\_​Atlas.​html 7. Hyon, S., Suewaka, D., Torii, Y., Oku, N., Ishida, H.: Development of a fast torque-controlled hydraulic humanoid robot that can balance compliantly. In: IEEE-RAS International Conference on Humanoid Robots, pp. 576–581 (2015) 8. Kaminaga, H., Ono, J., Nakashima, Y., Nakamura, Y.: Development of backdrivable hydraulic joint mechanism for knee joint of humanoid robots. In: IEEE International Conference on Robotics and Automations, pp. 1577–1582 (2009) 9. Schauder, C.D., Caddy, R.: Current control of voltage-source inverters for fast four-quadrant drive performance. IEEE Trans. Ind. Appl. 18(2), 163–171 (1982)CrossRef 10. Boaventura, T., Focchi, M., Frigerio, M., Buchli, J., Semini, C., Medrano-Cerda, G.A., Caldwell, D.G.: On the role of load motion compensation in high-performance force control. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 4066–4071 (2012) Planning and Control © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_58 Gait Synthesis for Modular Soft Robots Scott Hamill¹  , Bryan Peele¹, Peter Ferenz¹, Max Westwater¹, Robert F. Shepherd¹ and Hadas Kress-Gazit¹ (1) Cornell University, Ithaca, USA     Scott Hamill Email: sbh92@cornell.edu Abstract Soft robots present a new opportunity for designing robots that can be produced quickly (on the order of hours), are capable of a variety of motions and behaviors, and are able to address a wide range of environments and tasks. The large design space of soft actuators can be leveraged to rapidly generate libraries of robotic components that can be used to compose modular soft robotic systems. To take full advantage of the large design space, we must have techniques for automatically synthesizing soft robot motions and behaviors. In this work, we develop a method for synthesizing gaits for walking soft robots, and show experimental results demonstrating synthesized gaits. 1 Background and Introduction Current Research in Soft Robots. Recent research has produced soft, fluidic actuators and robots capable of walking, gripping, and manipulation using various methods of manufacture [1–4]. Soft robots have been constructed as integrated [5] or modular designs [6], some of which utilize hard components either as an extension of the robot [7], or as internal components allowing for modular attachments [8]. Soft robots and actuators can be damage resistant and robust to a variety of hostile environments [9, 10]. These features allow for a large design space for soft robotic systems. Modular Soft Robots. The expansive design space and rapid manufacturability of soft robots allow designers to quickly explore actuator designs (libraries of components) suitable for a wide range of tasks and behaviors. In addition, soft actuators often have high rates of failure and modular systems allow for rapid replacement of failed components as opposed to manufacturing a new robot. Previously developed modular soft robots, while versatile, have not demonstrated motion synthesis. Controlling Soft Robot Locomotion. Synthesizing motions and controllers for soft robots can be difficult as soft actuators are difficult to model. Previously developed control methods include using closed loop control for a planar soft robotic manipulator [11] moving around obstacles, and open loop trajectory optimization methods for a soft robot manipulator executing a grasping task [12]. However, these methods are not directly applicable to actuators for walking robots as they do not consider ground contact forces. Soft robots are capable of locomotion using a variety of gaits, but usually these gaits and resulting controllers are determined empirically for specific robotic platforms, such as the walking gaits in [5]. Other examples include serpentine motions [6] and caterpillar inspired rolling motions [13]. Examples of soft robot locomotion also include full body rolling motions [14], in this case using sensor feedback to inflate sections of a tank-tread like robot. Leveraging the expansive design space of soft actuators to quickly address autonomous tasks requires the ability to synthesize a motion and controller for a composed soft robot. In this work, we develop a method for estimating motions and synthesizing gaits for walking robots utilizing soft actuators, design and characterize an example soft actuator, and present experimental results for gaits synthesized for two composed robots. 2 Technical Approach Soft, fluidic actuators are controlled by regulating the pressure in each pressure chamber of the actuator - the geometry of the actuator changes depending on the pressure of each chamber, thereby providing some force and torque. In this work we assume the pressure chamber of each soft actuator is controlled by a three-way solenoid valve, exposing the chamber to a common rail air pressure when open, or exposing the chamber to atmospheric pressure when closed, venting the chamber. Each valve setting is considered to be a binary value - the valve is open or closed - and the set of valve settings across all of the pressure chambers of all actuators on the robot is referred to as the valve state, s. For a given valve state, the robot assumes a discrete configuration when all of the actuator chamber pressures reach a steady state value. We assume the geometry of each actuator at steady state is known, and by analyzing the actuator geometry for each individual configuration, we can compare two different configurations and estimate the motion the robot may undergo by transitioning between the configurations. The approach taken in this work is to evaluate the discrete configurations of the robot and to then determine a sequence of configurations that is expected to result in some robot motion. Defining Robot Poses and Transitions as a Graph. A soft robot with n pressure chambers across all actuators has [$$2^n$$] discrete valve states. However, the robot configuration for a given valve state may have more than one stable pose, as shown in Fig. 1(a–b), depending on which actuators are in contact with the ground surface. A pose is stable if the projection of the robot center of gravity (CG) lies within the polygon defined by the endpoints of those actuators in contact with the ground surface, referred to as the contact polygon, Fig. 1(c–f). We assign each individual actuator on the robot an integer index, i, and we define the pose, p, to be the set of indices of those actuators in contact with the ground surface. For example, if the actuators in Fig. 1(c) are labeled [$$\{1,2,3,4,5,6\}$$] clockwise starting from the top right actuator, [$$p = \{1,3,4,6\}$$]. We define the chassis rotation and translation that may occur when transitioning between poses as r and t, respectively, both defined as vectors in a body fixed reference frame. We assume that each actuator can be modeled as a rigid structure and that there is no slip between the actuators and ground surface. We then construct a graph, [$$G = (V,E)$$] of vertices and edges, where each vertex represents a unique pose the associated valve state, [$$v = \{p,s\}$$], and each edge represents a possible transition between two poses, [$$e = \{v,v',r,t\}$$]. [] Fig. 1. Side view of robot showing two different poses (a), and (b) for a single robot configuration. Top down view of the robot: pose 1 (c), pose 2 (d), poses 1 and 2 with contact polygons outlined in black (e–f), poses 1 and 2 with shared contact polygons outlined in red (g–h). Robot center of gravity (CG) is shown in green. Endpoints of actuators in contact with the ground are circled in red. Estimating Robot Motion for a Transition. The robot chassis rotation, r, and translation, t, that occur during a transition between poses as defined by a graph edge e may be estimated by comparing the geometry of those actuators that remain in contact with the ground surface during the transition. The set of shared contact indices, k, is defined as the set of actuator indices common to the poses of two graph vertices for a given edge: [$$k = \{ p \bigcap p'\}$$]. For a given graph edge, the motion of the robot can be estimated by evaluating the pre and post transitions polygons defined by the endpoints of the actuators in k - the shared contact polygons. An example of the shared contact polygons is shown in Fig. 1(g–h). We use the Kabsch algorithm [15] to determine the optimal rotation matrix, R, for mapping the pre and post transition shared contact polygons defined by k for each pose. The chassis rotation angles, r are determined from R, and the chassis translation vector, t, is determined from the centroids of the pre and post transition shared contact polygons. The algorithm used in this work for building the graph is described in Algorithm 1. A transition between vertices is added if either of the following two conditions are met: 1. 1. (Condition 1) The pre and post transition robot poses are stable with respect to the shared contact polygon, i.e. the robot remains stable with respect to those actuators that remain in contact with the ground before and after the transition.   2. 2. (Condition 2) The projection of the center of gravity of the robot is estimated to move outside of the contact polygon of the pre-transition pose and into the contact polygon of the post-transition pose.   Robot Control and Gait Synthesis. Controlling the robot is a process of selecting a series of valve states that results in a sequence of robot poses such that the robot chassis achieves a desired motion. The valve state is determined at fixed time steps and the pressure of the actuator chambers is allowed to reach equilibrium before transitioning to the next valve state. A cycle of valve states can be determined by searching for cycles over the graph G. This is done by assigning weights to the graph edges in E and performing a shortest path graph search. Edge weights are determined by first selecting a graph edge, [$$e' = \{v,v',r',t'\}$$] with a desirable motion as determined by a user and then comparing the chassis motion of each of the other edges [$$e^{\*} \in E \ne e'$$] to the candidate edge. In this work, given [$$e'$$], the edge weight assigned to edge [$$e^{\*} = \{v^{\*},v^{\*\*},r^{\*},t^{\*}\}$$] is [$$(1-cos(\alpha ))^4)$$], where [$$\alpha $$] is the angle between the vectors [$$t'$$] and [$$t^{\*}$$]. In the case where [$$\alpha $$] is zero, the edge is assigned an arbitrarily small number, in this case 0.0001, and edges where [$$t^{\*} = 0$$] (no chassis motion) are also given an arbitrarily small number, in this case 0.01. However, methods of weight assignment are tunable parameters. A sequence of graph vertices can then be found by temporarily removing the edge [$$e'$$] from the graph and performing a shortest path graph search between v and [$$v'$$]. The poses of the returned sequence of graph vertices are the gait and the returned sequence of valve states is the control input for the robot. All control is open loop - no sensing or feedback is provided from the actuators. [] 3 Example Actuator The graph and gait synthesis algorithms are general and can accommodate any actuator. To demonstrate the gait synthesis algorithm we developed a soft actuator and several chassis to enable experimenting with different robots. Example Actuator Design. The actuators we designed for this work are cast from an elastomer material (Elastosil [$$^{\textregistered }$$] M 4601 silicone rubber) and contain four, parallel, radially symmetric pressure chambers. The actuator deforms as the various chambers are pressurized, causing the actuator to curve away from the pressurized chamber(s). The actuator mold and body are shown in Fig. 2. A completed actuator is approximately 95 mm in length and weighs approximately 34 grams. This design is similar to the three chambered actuator described in [4], but does not rely on a dedicated strain limiting material and requires fewer steps to fabricate. [] Fig. 2. (a) Exploded view of actuator mold design (actuator body model shown in purple), (b) image of completed mold body and end caps, pre final assembly. All pressure generation, regulation, and valving are off-board. It is assumed that the actuators conform to a constant radius of curvature when pressurized, characterized by a constant of curvature k and an angle [$$\phi $$], shown in Fig. 3(a–c). The actuator in Fig. 3(b) has one chamber inflated. The green circles in Fig. 3(a), (b) and (d) are visual reference markers for evaluating actuator curvature. An example robot, shown in Fig. 3(d), is comprised of six actuators attached to a symmetric, extruded ABS plastic chassis. An example robot with an asymmetric chassis is shown in Fig. 3(e). Rigid, non-soft chassis components are used in this work in order to simplify the analysis of the robot geometry. [] Fig. 3. (a–b) Pneumatic actuator (Elastosil [$$^{\textregistered }$$] M 4601 silicone rubber), (c) actuator geometry, (d-e) example chassis assemblies. Actuator Characterization. Elastomeric materials experience stress softening - as each actuator chamber is repeatedly pressurized to a stress level not previously experienced, the stiffness of the material decreases and the actuated constant of curvature increases (the Mullins effect) [16]. Gait synthesis is dependent upon the ability to estimate transition motions, and, as a result, actuator performance parameters and stress softening behavior must be characterized before a gait can be accurately determined. We conducted a set of experiments in order to characterize the stress softening behavior of the actuators used in this work. We conducted these experiments both on actuators that were tested without prior pressurization, and actuators that had been pre-pressurized (inflated approximately 20 times before testing). In each experiment, each pressure chamber of an actuator was pressurized in sequence repeatedly. The solenoid valve for each pressure chamber was opened for 10 seconds, exposing the chamber to the common rail pressure of 93 kPa, and then closed for 10 seconds allowing the chamber to vent to atmospheric pressure. The curvature of the actuator was recorded using a Vicon motion capture system. Figure 4(a) shows the curvature data for a single actuator without pre-pressurization. [] Fig. 4. Example actuator pressure-curvature data: (a) stress softening of an unprepared actuator, (b) constant performance of “pre-pressurized” actuator. The data indicate that the rate of stress softening is significant in the first few cycles and then quickly decreases. However, the rate of stress softening does not appear to ever decrease to zero - the constant of curvature of the actuator under steady-state pressurization continued to increase slowly during all experiments. The curvature data for an actuator prepared by pre-pressurization is shown in Fig. 4(b). In these experiments, each chamber of the actuators was first pressurized approximately twenty times to approximately 97 kPa. The data indicate that the over-pressurization process mitigates the stress softening behavior at stress values below the maximum previously experienced by the actuator, as predicted by the Mullins effect. As a result, we expect actuators pressurized using this method to maintain constant performance when utilized on a robot. 4 Gait Synthesis Experiments and Results As the size of the graph of all possible valve configurations grows exponentially with the number of valves, a method is needed for rapidly generating graph states and searching for gaits without needing to explore the entire state space. We implemented two different methods of graph generation and gait search - one involving random state generation and one involving searching for states around a graph edge of interest. Random State Generation. The first method of synthesizing a gait involves building the graph randomly. Unique valve states are generated randomly and added to the graph, and the gait search process is attempted after 100 unique vertex additions. If the graph has sufficient connectivity and a cycle is found, the sequence of poses and valve states are returned as a feasible gait. Gaits returned by this method were evaluated on the hexapod robot shown in Fig. 3(d). We observed that the assumptions about configurations and poses did not hold - actuator compliance and the unmodeled friction between the actuators and ground surface resulted in the robot routinely failing to make the pose transitions predicted by the returned gait, thereby invalidating the gait. We observed that friction and actuator compliance caused the robot to assume poses not included in the gait sequence - the robot would attempt to transition from one pose to another but fall, or “sag”, into a different pose, or actuators that were assumed to be free from ground contact remained in contact due to actuator compliance. Building Gaits Around Single Graph Edges. The second method involves finding a graph edge with a desirable motion (translation or rotation), [$$e' = \{v',v'',r',t'\}$$], and then building a new graph, [$$G' = (V',E')$$], where [$$v',v'' \in G'$$], such that only edges with zero translation and rotation values are added. When the shortest path graph search is implemented on [$$G'$$], all edge weights are assigned an identical weight of 1, thereby causing the graph search to return a path with the smallest possible number of vertices if a path exists. If no path exists, further valve states (and subsequently further poses) are added to the graph [$$G'$$]. By implementing this method, referred to as the reset method, the returned pose sequence includes the motion of interest (from [$$v'$$] to [$$v''$$]) and then a sequence of edge transitions that allow the robot to “reset” the actuators to the initial pose without inducing chassis motion. The main difference between the two methods is that the reset method builds a sparsely connected graph - the graph search process only evaluates pose transitions with no associated chassis motion. Experiments and Results. We evaluated gaits synthesized using the reset method for two different robot chassis, one symmetric (Fig. 3(d)) and one asymmetric (Fig. 3(e)). Each chassis included six of the example actuators. Two different synthesized gaits were demonstrated on the symmetric hexapod, one involving five valve states and poses and one involving nine valve states and poses. One synthesized gait involving six valve states and poses was demonstrated on the asymmetric robot. The five state pose sequence for the symmetric robot is shown in Fig. 5. A video of this sequence can be found at https://​www.​youtube.​com/​watch?​v=​GNiNd51Hp-A. We used a Vicon motion capture system to record the motion of each demonstrated gait. In each experiment, we recorded five runs of the robot executing the gait sequence approximately six times sequentially. The robot began each run in the same location and orientation. The recorded x/y position of the center of the robot chassis for each experiment is shown in Fig. 6. [] Fig. 5. Five pose gait, symmetric robot. The main transition of the edge of interest (a–b), chassis moving right to left. The sequence of four reset transitions (c–e) “reset” the actuators back to the initial pose (f). [] Fig. 6. Recorded chassis motions: (a) five pose sequence, symmetric hexapod, (b) nine pose sequence, symmetric hexapod, (c) six pose sequence, asymmetric hexapod The transitions for both the symmetric and asymmetric hexapod robots were estimated to propel the robot forward (in the x direction as shown in Fig. 6) approximately 6 cm - one fifth of a body length per cycle in the case of the symmetric robot, and one third of a body length per cycle in the case of the asymmetric robot. It should be noted that several actuators experienced ruptured chambers during the gait demonstration tests, and, as such, the actuator sets were inconsistent between runs. The motions for each run of the nine sequence (symmetric robot) and six sequence (asymmetric robot) were consistent - the trajectories were similar. The motion for the five sequence gait, however, was not consistent as the trajectories varied between runs. This is likely due to the shorter number of poses in the five pose sequence. By “resetting” the actuators in fewer poses, the robot typically had fewer actuators in contact with the ground surface during the reset poses, which likely increased the impact of actuator compliance and ground friction on the pose of the robot. The consistency of the trajectories of the asymmetric robot, despite the relatively low number of sequence poses, is likely due to the geometry of the chassis. During the experiments we observed the asymmetry of the actuators improved the ability of the robot to maintain its pose during the reset transitions, i.e. the robot was better able to “balance” during the reset transitions. 5 Discussion and Future Work In this paper we presented (1) a method for synthesizing gaits for a walking robot using soft actuators, (2) experimental data on soft actuator performance, and (3) experimental data for synthesized gaits for both symmetrical and asymmetrical robots. Discussion. The gait synthesis process is highly sensitive to the assumptions made about actuator rigidity and friction between the actuators and ground surface. The gait synthesis method based on random state generation returns unreliable gaits as many of the assumptions about actuator rigidity and surface friction do not hold - the robot fails to achieve the sequence of poses in order as predicted by the returned gait. Implementing a gait synthesis process that relies on “reset” transitions around a graph edge of interest is capable of producing viable gaits as it is less sensitive to the issues of actuator stiffness and surface friction. In addition, this gait synthesis process is capable of synthesizing motions for asymmetric robots, which indicates that this process may be scalable to robots of arbitrary chassis and actuator designs. However, not all returned gaits proved consistent in performance - the motions of two of the demonstrated gaits were repeatable while the motion of the third gait was not. The gait synthesis methods presented in this work and demonstrated on the composed robots are open loop - no sensing or feedback is provided. Knowledge of the pose of the robot would improve the ability to implement a synthesized gait. Implementing even minimal sensing capabilities, such as a simple true/false contact sensor, into the soft actuators will provide pose information and will thereby improve the gait implementation process. Future Work. Further work on gait synthesis will focus on implementing pressure sensors into the soft actuators that will provide the ability to determine which actuators are in contact with the ground surface. This will allow the robot to determine its true pose, thereby allowing the robot to detect deviations from a synthesized gait and to adapt accordingly. This capability will help to alleviate the issues with actuator rigidity and ground friction assumptions. If a robot fails to make a desired transition and instead detects an unpredicted pose - one not included in the synthesized gait - the robot may be able to either adapt the current gate to include the detected pose, or synthesize a new gait that includes the detected pose. This capability will allow the robot to adapt to variation in actuator performance and issues with actuator stiffness and ground friction without sacrificing locomotive capability. Acknowledgments This work was supported by NSF ExCAPE and National Science Foundation Graduate Research Fellowship Grant No. DGE-1144153. References 1. Zhao, H., Li, Y., Elsamadisi, A., Shepherd, R.: Scalable manufacturing of high force wearable soft actuators. Extreme Mech. Lett. 3, 89–104 (2015)CrossRef 2. Ilievski, F., Mazzeo, A.D., Shepherd, R.F., Chen, X., Whitesides, G.M.: Soft robotics for chemists. Angew. Chem. Int. Ed. 50(8), 1890–1895 (2011)CrossRef 3. Peele, B.N., Wallin, T.J., Zhao, H., Shepherd, R.F.: 3D printing antagonistic systems of artificial muscle using projection stereolithography. Bioinspir. Biomim. 10(5), 055003 (2015)CrossRef 4. Waynelovich, J., Frey, T., Baljon, A., Salamon, P.: Versatile and dexterous soft robotic leg system for untethered operations. Soft Robot. 3(2), 64–70 (2016)CrossRef 5. Shepherd, R.F., Ilievski, F., Choi, W., Morin, S.A., Stokes, A.A., Mazzeo, A.D., Chen, X., Wang, M., Whitesides, G.M.: Multigait soft robot. Proc. Nat. Acad. Sci. 108(51), 20400–20403 (2011)CrossRef 6. Onal, C.D., Rus, D.: A modular approach to soft robots. In: 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Rome, pp. 1038–1045 (2012) 7. Stokes, A.A., Shepherd, R.F., Morin, S.A., Ilievski, F., Whitesides, G.M.: A hybrid combining hard and soft robots. Soft Robot. 1(1), 70–74 (2013)CrossRef 8. Kwok, S.W., Morin, S.A., Mosadegh, B., So, J.-H., Shepherd, R.F., Martinez, R.V., Smith, B., Simeone, F.C., Stokes, A.A., Whitesides, G.M.: Magnetic assembly of soft robots with hard components. Adv. Funct. Mater. 24(15), 2180–2187 (2014)CrossRef 9. Shepherd, R.F., Stokes, A.A., Nunes, R.M.D., Whitesides, G.M.: Soft machines that are resistant to puncture and that self seal. Adv. Mater. 25(46), 6709–6713 (2013)CrossRef 10. Martinez, R.V., Glavan, A.C., Keplinger, C., Oyetibo, A.I., Whitesides, G.M.: Soft actuators and robots that are resistant to mechanical damage. Adv. Funct. Mater. 24(20), 3003–3010 (2014)CrossRef 11. Marchese, A.D., Katzschmann, R.K., Rus, D.: Whole arm planning for a soft and highly compliant 2D robotic manipulator. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, pp. 554–560, 14–18 September 2014 12. Marchese, A.D., Tedrake, R., Rus, D.: Dynamics and trajectory optimization for a soft spatial fluidic elastomer manipulator. In: IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, pp. 2528–2535 (2015) 13. Lin, H.-T., Leisk, G.G., Trimmer, B.: Goqbot: a caterpillar-inspired soft-bodied rolling robot. Bioinspir. Biomim. 6(2), 026007 (2011)CrossRef 14. Correll, N., Onal, C.D., Liang, H., Schoenfeld, E., Rus, D.: Soft autonomous materials using active elasticity and embedded distributed computation. In: The 12th International Symposium on Experimental Robotics, pp. 227–240 (2014) 15. Umeyama, S.: Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Anal. Mach. Intell. 13(4), 376–380 (1991)CrossRef 16. Mullins, L.: Softening of rubber by deformation. Rubber Chem. Technol. 42(1), 339–362 (1969)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_59 Discovering and Manipulating Affordances R. Omar Chavez-Garcia¹  , Mihai Andries¹  , Pierre Luce-Vayrac¹   and Raja Chatila¹   (1) Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Universités, UPMC Univ. Paris 06, CNRS, 75005 Paris, France     R. Omar Chavez-Garcia (Corresponding author) Email: chavez@isir.upmc.fr   Mihai Andries (Corresponding author) Email: andries@isir.upmc.fr   Pierre Luce-Vayrac Email: luce-vayrac@isir.upmc.fr   Raja Chatila Email: raja.chatila@isir.upmc.fr Abstract Reasoning jointly on perception and action requires to interpret the scene in terms of the agent’s own potential capabilities. We propose a Bayesian architecture for learning sensorimotor representations from the interaction between perception, action, and salient changes generated by robot actions. This connects these three elements in a common representation: affordances. In this paper, we are working towards a richer representation and formalization of affordances. Current experimental analysis shows the qualitative and quantitative aspects of affordances. In addition, our formalization motivates several experiments for exploring hypothetical operations between learned affordances. In particular, we infer affordances of composite objects, based on prior knowledge on the affordances of elementary objects. Keywords AffordancesSensorimotor representationsDevelopmental robotics 1 Motivation, Problem Statement The grounding of robotic knowledge [19] is the problem of creating links between the entities and events in the observable environment and their symbolic representations employed by a robot’s reasoning algorithms. Solving this problem would allow robots to autonomously discover their environment, without the need of human intervention. Symbolic grounding cannot be achieved by a process of observation alone, and requires interaction between the agent and its environment. In this paper, we study, develop, and experimentally evaluate sensorimotor representations and scene interpretation processes based on visual and proprioceptive inputs when the robot physically interacts with objects. This enables robots to understand their environment by interacting with it. Our architecture builds models of objects based on perceptual clues and effects of robot actions on them, which relate to the notion of affordance. We employ a Bayesian network that represents with continuous and discrete variables the objects, actions, and effects in the observable environment. We then perform structure learning to identify the most probable Bayesian network that best fits with the observed data. The discovered structure of the Bayesian network allows the robot to discover causal relationships in the environment using statistical data. The remainder of the paper is structured as follows. In Sect. 2 we discuss related work on the discovery of object affordances, and we introduce our specific contribution. Section 3 describes our technical approach, including an illustration of the architecture employed for learning affordances. Experimental results are presented in Sect. 4. We draw a conclusion Sect. 5 and present ideas for future work. 2 Related Work on Object Affordance Discovery From the seminal work of Gibson, the affordance of anything is a specific combination of the properties of its substance and its surfaces taken with reference to an animal [1]. Sahin et al. discuss on the former definition for the domain of autonomous robot control. They introduce the acquired aspect of an affordance, such that when an agent applies a behavior on an entity, an effect is generated [2]. At the same time, several efforts have spawned from the domain of developmental robotics, for exploring and learning robots’ environment. The approach was based on a cycle of exploration-manipulation, initialized with a collection of minimal knowledge and innate capabilities. These works studied the discovery of meaningful discrete motion primitives [10] or sequences thereof [8], using stochastic and deterministic [16] approaches. This allowed a robot to learn object affordances and the predictors that anticipate the effects that these action primitives create. Stoytchev [4] suggested that the autonomous learning of affordances by a robot provides representations of the observed objects, actions, and effects that are grounded in the environment. This hinted that the affordances can be used to create grounded symbolic representations for the observed entities and events, both in the physical and abstract world. Montesano et al. [3] modeled affordances with Bayesian networks. The unsupervised learning of affordances was formulated as a structure learning algorithm, where affordances were encoded in the probabilistic relations between actions, object features and effects. Hermans et al. [6] suggested to learn affordances in 2 steps: first generating object attributes from the observed visual features, and then linking these object attributes to affordances. In their case, they employed 2D visual features of objects (shape, color, material) and weight features to learn and predict affordances such as pushable, rollable, graspable, liftable, dragable, carryable, and traversable. Similarly, Jain et al. attempted to estimate the affordances of previously unknown tools, based on the assumption that functional features remain distinctive and invariant across different tools used to perform similar tasks [7]. The proposed system learns bi-directional causal relationships between actions, functional features and the effects of tools, and uses a Bayesian network to model the probabilistic dependencies in the observation data. Zhu et al. [9] inferred the affordances of objects (with a particular interest for tools) based on their resemblance to other objects observed while in use by a human during a learning phase, using RGB-D data. They made the hypothesis that the object use demonstrated by the human was optimal. The main novelty of this paper consists in predicting the affordances of combinations of objects, based on prior knowledge on the affordances of the constituent parts of the composite object. We employ a probabilistic architecture, that generates a sensorimotor representation which encodes, through the learning of affordances, effects, objects and actions using the same formalism. The architecture spans from low-level data acquired from sensors and actuators, up to learning relations between higher-level representations. We use 3D visual features, as well as force measurements to create a description of the objects and effects generated through interactions with objects. Although we assume that the agent has a limited innate set of sensors and motor capabilities, the architecture allows for learning and extending these capabilities as well. We employed a continuous Bayesian network (as opposed to discrete Bayesian networks) to work with the quantitative aspect of affordances (i.e. to measure, learn and predict intensities of effects). 3 Technical Approach Figure 1 shows the proposed architecture for learning affordances. Measurements from Visual perception and Environment interaction are considered as the main inputs of our approach. Visual perception extracts, from clouds of points, a set of clusters. Clusters are then tracked to generate object hypotheses to interact with. A motivational system is in charge of selecting objects and actions that will be applied on them. Proprioceptive feedback is retrieved in the form of joint and force measurements. Effect detectors analyze the input from perception and action tasks to extract salient changes. Sensorimotor learning is the intersection between the two input processes and represents the fusion between the perception and action components. Affordances learning finds the correlations that build the final sensorimotor representation by relating objects, actions and induced salient changes considered as effects. A long-term storage is used to save the final representation and to provide a feedback for the motivational system. [] Fig. 1. Architecture of the proposed sensorimotor approach for affordance learning. 3.1 Visual Perception In order to interact with the environment, a segmentation process is performed to identify the objects in the scene. Voxel Cloud Connectivity Segmentation (VCCS), presented in [11], benefits from 3D geometry provided by RGB-D cameras to generate an even distribution of supervoxels in the observed space. The seeding methodology to find the supervoxels is based on 3D space and a flow-constrained local iterative clustering which uses color and geometric features. As this algorithm relies on strict partial connectivity between voxels, it guarantees that supervoxels cannot flow across boundaries which are disjoint in 3D space. Once we obtained the oversegmentation from supervoxels extraction, we implemented the non-parametric clustering detailed in [12] to find the shape of the object hypotheses based on the set of supervoxels. 3.2 Affordances for Sensorimotor Representation We consider an agent (robot) endowed with a set of innate actions A, and a set of innate feature extractors [$$\mathcal {P}$$], that can be augmented through learning. In addition, O is the set of all the objects in the environment, and E is the set of all the possible observable effects. When the agent applies action [$$a \in A$$] to an entity (object) [$$o \in O$$] in the environment, a salient change (effect) [$$e \in E$$] is generated, we call this acquired relation an affordance [2]. From the agent’s perspective, a resulting affordance is defined as follows: [$$\begin{aligned} \alpha ^{\text {agent}} =(e, (o, a)), \text { for } e \in E,\ o \in O, \text { and } a \in A \end{aligned}$$] (1) more generally, this agent will gradually build a set of affordances [$${\textit{Aff}}$$] composed of the affordances [$$\alpha \_i$$]: [$$\begin{aligned} \alpha \_{i}^{\text {agent}} =(e\_{j}, (o\_{k}, a\_{l})), \text { for } e\_{j} \in E,\ o\_{k} \in O, \text { and } a\_{l} \in A \end{aligned}$$] (2) An object [$$o\_k$$] is defined as the set of values for the n innate properties extractors [$$\rho \in \mathcal {P}$$]: [$$\begin{aligned} o\_k= \{\rho \_{1}(cluster), \rho \_{2}(cluster), ... \rho \_{n}(cluster)\}, \end{aligned}$$] (3) where cluster represents the object hypothesis obtained by the visual perception module. Actions are a set of motor capabilities [$$A = \{a\_1,...,a\_m\}$$], defined as: [$$\begin{aligned} a\_k(V^\*, \gamma , \sigma \_{a\_k}), \end{aligned}$$] (4) being [$$V^\*$$] the desired value for the robot control variables V, [$$\gamma $$] its proprioceptive feedback and [$$\sigma \_{a\_k}$$] the particular action parameters. Effects are a set of salient changes in the world [$$\omega $$] detected by robot’s innate detectors e: [$$\begin{aligned} E=\{e\_{1}(\omega ), e\_{2}(\omega ), ..., e\_{q}(\omega )\} \end{aligned}$$] (5) which means that effects can be related to objects and agents, allowing to detect exteroceptive and proprioceptive changes. 3.3 Affordance Learning Considering the statistical nature of acquiring affordances through environment exploration, elements E, O and A in (1) can be represented as random variables in a Bayesian Network (BN) [$$\mathcal {G}$$]. Through the cycle of perception-interaction we obtain instances of these variables generating a data set [$$\mathcal {D}$$]. The problem of discovering the relations between E, O and A can be seen as finding dependencies between the variables in [$$\mathcal {G}$$], i.e., learning the structure of the corresponding BN from data [$$\mathcal {D}$$]. Using the BN framework we are capable of displaying relationships between variables. The directed nature of its structure allows us to represent cause-effects relationships and to combine the action and perception components in a stochastic sensorimotor representation. We implement a score-based maximization approach for finding the BN structure from [$$\mathcal {D}$$] [13]. The score of a BN structure [$$\mathcal {G}$$] is defined as the posterior probability given the data [$$\mathcal {D}$$], i.e. [$$\mathcal {S}(\mathcal {G},\mathcal {D}) = P (\mathcal {G}|\mathcal {D})$$], we define [$$\mathcal {S}$$] as the compression rate of the data [$$\mathcal {D}$$] with an optimal code induced by the BN. As the number of independent and identical distributed random variables tends to infinity, no compression of data is possible for a rate less than the Shannon entropy [18]. Information scoring functions for structure learning are based on compression [18]. The score of a BN [$$\mathcal {G}$$] is related to the compression that can be achieved over the data [$$\mathcal {D}$$] with an optimal code induced by [$$\mathcal {G}$$]. The quality of a BN can be computed by: [$$\begin{aligned} \mathcal {S} (\mathcal {G}|\mathcal {D})= \mathcal {S}\_{log-l}(\mathcal {G}|\mathcal {D}) - f(N)|\mathcal {G}| \end{aligned}$$] (6) where the log-likelihood score [$$\mathcal {S}\_{log-l}$$] tendd to favor complete network structures, without providing reliable independence assumptions for the learned network [14]. [$$|\mathcal {G}|$$] denotes the network complexity, i.e. the number of parameters in the network. f(N) is a non-negative penalization function. If [$$f(N) = 1$$], (6) becomes the AIC (Akaike Information Criterion) score; if [$$f(N) = \frac{log(N)}{2}$$], (6) represents the BIC (Bayesian Information Criterion) score [14]. Bayesian inference in our discrete BN provides the probability that an affordance [$$\alpha \_i$$] is present. However, it does not provide a mechanism to quantify the affordance w.r.t. the specific environment situations that triggered it. We believe that by preserving the continuous aspect of the elements in the affordance (1), we also maintain the necessary information for an affordance quantifying approach, i.e., Bayesian inference over a Gaussian BN (GBN). Relations in (2) can be represented as a multivariate normal distribution of continuous random variables, i.e., the affordance’s elements. Continuous variables are modeled as linear regressions in a Gaussian BN, where the relevant parameters of each local distribution are the regression coefficients (for each variable parent) and the standard deviation of the residuals. Structure learning is performed by identifying vanishing regression coefficients using two assumptions, event equivalence and parameter modularity, that allow the construction of prior distributions for our multivariate normal parameters [15]. As in learning of discrete BN structures, we implemented a score function to evaluate the quality of the continous BN. To do so, we used a score equivalent to the Gaussian posterior density, which follows a Wishart distribution and is at the core of the belief networks framework [17]. 3.4 Sensorimotor Learning Results Our Baxter experimental platform (Fig. 2) is equipped with 2 arms with 7 degrees of freedom. One electrical gripper is attached to each arm. For visual perception, we use a Microsoft Kinect sensor that captures RGB-D data. For environment interaction we use the left arm and its respective gripper. [] Fig. 2. The Baxter robotics platform used for our experiments. The RGB-D camera used for perception is visible in the foreground. In order to evaluate the generalization capabilities of the bayesian model learned with our architecture, we implemented a discrete structure learning for an experiment composed of: several objects represented by dominant color ([$$\rho \_{color}$$]), size ([$$\rho \_{size}$$]) and shape ([$$\rho \_{shape}$$]); four innate actions: poke ([$$a\_{poke}$$]), push ([$$a\_{poke}$$]), open gripper ([$$a\_{open\\_g}$$]) and close gripper ([$$a\_{close\\_g}$$]); and three types of effect detectors: feedback force in the end effector ([$$e\_{f}$$]), distance between the gripper fingers ([$$e\_{gr\\_d}$$]), and one movement detector for each detected object ([$$e\_{mv\_{o\_i}}$$]). We developed a hill-climbing based algorithm to search the optimal structure [13]. Two score functions were implemented: BIC and AIC as described in Sect. 3.3. In both cases, the learned BN structure is increasing the quality of representation for the data when more interactions are made by the agent. In addition, the resulting BN can better generalize the learned knowledge for future interactions. Although the log-likelihood loss for both information-based scores seems to be similar, the learned network from AIC score is less complex than the one from BIC score, which influences the performance of BN inference afterwards. Applying inference over the learned BN, the robot can estimate probability distributions for effect prediction P(E|O, A), feedback in action selection P(A|O, E) or object recognition given its behavioral description P(O|A, E). Using our proposed architecture (Fig. 1), we defined our experiment on the continuous framework as follows. The relevant object properties are the dominant color, visible area as size and object elliptical eccentricity as shape. Action grasp is defined as grasp(pos), where pos is a parameter describing the position (w.r.t. to object’s longitudinal component) where grasp is performed. The relevant effect [$$gripper\\_state$$] varies over the distance between gripper’s fingers. [] Fig. 3. Learned Gaussian BN. All nodes are defined as continuous random variables. Figure 3 shows the Gaussian BN learned by our architecture. Interactions were done on two different objects: a blue bar and a red bat of baseball. The learned GBN models grasp-ability as present in both objects, but only the blue bar is graspable for every configuration of the action. Grasp-ability on the red bat disappears after passing the half of its length. A video demonstration can be found at https://​cloud.​isir.​upmc.​fr/​owncloud/​index.​php/​s/​9EKUq05D58Wiyfo. 4 Experiments with Affordances of Composite Objects The goal of our experiments is to identify a formalism that could infer the affordances of composite objects, based on prior knowledge about the affordances of the primary objects that constitute them. We state that Bayesian networks, through structure learning, can not only discover affordances, but also capture their quantitative aspect, by employing continuous variables in the representation of actions and effects, that are represented as continuous variables. The experiments will help us demonstrate this. Our experimental procedure is composed of four steps: (1) performing a certain action with a set of objects (separate and composite) and observing the effects, (2) defining the random variables corresponding to the observed objects, actions, and effects inside the Bayesian network, (3) feeding the interaction data to the structure learning algorithm of the Bayesian network, and (4) interpreting the structure of the Bayesian network that best fits the recorded data according to calculations. First, we consider the discovery of affordances. From our experiments, we can interpret the model learned by the discrete Bayesian Network as a qualitative aspect of an affordance, regarding the presence or absence of a relation between the elements of an affordance (e.g. an object is push-able, i.e. it goes a certain distance from its original location). We further attempt to attach a quantitative dimension to the learned affordance by representing its elements as continuous random variables. This allows not only to predict that the affordance is present, but also to infer the parameter values of its elements that influence this affordance (e.g. the effect of the push action on the object is a function of the action’s input parameters). Three experiments are analysed in this section, all related to the inference of affordances of composite objects: (1) affordance acquisition, (2) affordance maintenance, and (3) affordance loss. Since our experiments focused on the composition of objects, we performed them on objects specifically designed for that: toys that can assemble and disassemble. These experiments are detailed in the following sections, and are illustrated in Table 1, which shows the objects employed, as well as their composition method. Table 1. The objects used in the experiments, and the employed composition order. [] 4.1 Affordance Acquisition Following the experiment description from Table 1, column Experiment 1, each object is described by two elements: one describing the number of atomic perceptual properties that forms it, and the other the position of the atomic property inside it (top or bottom). These properties allow to represent atomic objects (with only one property) and possible composite objects. In this scenario, we have two atomic perceptual properties wheel and cartFrame and together they can combine to form a cart. The robot performed random interactions with the action [$$a\_{poke}$$] and the atomic objects wheel, cartFrame and with the composite object cart (50 interactions with each object). The effect detector developed was based on the distance that an object moves after the action is executed. We use Gaussian random variables to represent the perceptual properties and the distance effect. We use nominal variables to represent the action undertaken (poke, no action), and the objects employed. The object composition was represented using 2 variables: objectBottom and objectTop, representing respectively the atomic object at the bottom of the composite object, and the one on top. Figure 5 shows the resulting network after the learning process. We can notice that the parameters influencing the distance, over which an object travels after an interaction, are correctly inferred. The action poke influences this distance, while the action noAction does not. The object at the bottom also influences this distance: wheels roll further than the cartFrame after poking. The object at the top is also linked to the distance variable, since the distance travelled by the cart (i.e. wheels with cartFrame on top) differs from the one travelled by the individual wheels. Let us use the relationships learned in this example, to infer the affordances of a similar composite object. 4.2 Affordance Maintenance and Loss The second and third experiments consist in learning the correct structure of the Bayesian network, so as to correctly predict the maintenance or loss of affordances of atomic objects that form the composite object. In this example, we will consider two new objects: the cart, and the blockLoad that we can put on or under the cart (see Experiments 2 and 3 in Table 1). We feed the BN the data obtained in the interactions with these new atomic objects (50 interactions with each object), but not for their composition, and obtain the BN seen in Fig. 4. [] Fig. 4. The Bayesian Network obtained after feeding the interaction data with the atomic objects cart and blockLoad. [] Fig. 5. Conditional linear Gaussian network obtained after learning process. We stated the acquired nature of an affordance in Eq. 2, and for this reason, the inferred affordance will be considered an estimation until the robot, by interaction, validates it. If we represent the composite object [$$obj\_{composite} = $$] [$$\{objectBottom = cart, \ objectTop = blockload\}$$], we can obtain an estimation of its affordance by calculating [$$P(distance|objectBottom = cart, objectTop = blockload)$$] from the learned BN. In our experiment, the probability distribution for this calculation is similar to [$$P(distance|objectBottom = cart)$$] showing experimentally that the estimated affordance movable of the composite object is similar to the affordance of one of its elements. In the BN represented in Fig. 4, if the value of the objectBottom variable is known, the variables distance and the objectTop are conditionally independent. This means that the upper part of a composite object does not influence the distance that this composite object traverses after a poke action. This can be interpreted as an affordance loss. On the other hand, the bottom part of a composite object (i.e. objectBottom) does impact the distance it traverses after a poke, suggesting that its affordance is maintained. This is confirmed experimentally: after a poke, the atomic objects cart and blockLoad travel an average distance of 45 cm and 9 cm, respectively. The composite object with the cart at the bottom travels an average distance of 28.4 cm, while the one with the blockLoad at the bottom travels an average distance of only 3.8 cm. 4.3 Discussion on the Results The goal of our ongoing experiments is to infer the relations that exist between affordances, which would allow to refine the definition and formalization of an affordance. By decomposing an object offering a specific affordance into its constituent parts, we may wonder what are the affordances of the obtained parts. Answering this question requires us to introduce a mathematical operator, which would be able to estimate the affordances of an object obtained through the decomposition of an object, or through the composition of objects. It is yet unclear if this mathematical operator would apply to the objects and their properties (identifying their affordances as a consequence), or if it would apply to the entire affordance relation (E, (O, A)). This opens a whole new domain of inquiry about the relations between affordances. We designed three experiments in order to test our hypothesis. Following (2), we can define a particular affordance for an object [$$o\_i$$] as [$$\alpha \_{i}= (e\_{k}, (o\_{i}, a\_{l}))$$]. Then, we can decompose [$$o\_i$$] into two new objects [$$o\_{i}'$$] and [$$o\_{i}''$$], by dissecting its property set in 2 complementary parts ([$$\varrho \_{1}, \varrho \_{2} \subset o\_i$$], [$$\varrho \_{1} \cup \varrho \_{2} = o\_i$$], [$$\varrho \_{1} \cap \varrho \_{2} = \emptyset $$]), one for each object, and padding them with null values: [$$\begin{aligned} \begin{aligned} o\_{i}' = \{\rho \_{x} | \rho \_{x} \in \varrho \_{1} \} \cup \{\rho \_{y}=null | \rho \_{y} \in \varrho \_{2}\}, \\ o\_{i}'' = \{\rho \_{x} | \rho \_{x} \in \varrho \_{2} \} \cup \{\rho \_{y}=null | \rho \_{y} \in \varrho \_{1} \}. \end{aligned} \end{aligned}$$] (7) Using the learned model from our proposed architecture, we can infer: [$$\begin{aligned} \begin{aligned} \alpha \_{i}' = (e\_{k}, (o\_{i}', a\_{l})), \text { } \alpha \_{i}'' = (e\_{k}, (o\_{i}'', a\_{l})). \end{aligned} \end{aligned}$$] (8) If the removal of a property does not influence the affordance of an object ([$$\alpha \_{i}' \equiv \alpha \_{i}$$]), then this property can be considered as non salient for this particular affordance. In addition, if we can rewrite [$$\alpha \_{i}$$] as: [$$\begin{aligned} \alpha \_{i} = (e\_{1}, (o\_{1}' \otimes o\_{1}'',a\_{1})), \end{aligned}$$] (9) the computation defined by the operator [$$\otimes $$] suggests the existence of a combination of affordances. Experiments can help to discover the properties of this composition operator. Let us represent the set of salient features from objects [$$o\_x$$] and [$$o\_y$$] for one of their affordances as [$$salient\_{\alpha \_i}(o\_x)$$] and [$$salient\_{\alpha \_j}(o\_y)$$] respectively, where [$$\begin{aligned} \begin{aligned} \alpha \_{i} = (e\_{kx}, (o\_{x}, a\_{lx})), \text { } \alpha \_{j} = (e\_{ky}, (o\_{y}, a\_{ly})). \end{aligned} \end{aligned}$$] (10) If [$$o\_x$$] and [$$o\_y$$] do not share salient features, [$$salient\_{\alpha \_i}(o\_x) \cap salient\_{\alpha \_j}(o\_y) = \emptyset $$], and [$$ |salient\_{\alpha \_i}(o\_x)|+|salient\_{\alpha \_j}(o\_y)|=n$$], we can construct a new object [$$o\_{xy}$$] by selectively combining the salient properties of [$$o\_{x}$$] and [$$o\_{y}$$], [$$\begin{aligned} o\_{xy} = salient\_{\alpha \_i}(o\_x) \cup salient\_{\alpha \_j}(o\_y), \end{aligned}$$] (11) which by definition should retain affordances [$$\alpha \_{i}$$] and [$$\alpha \_{j}$$]. We can empirically discover the properties of affordances of this new object [$$o\_{xy}$$] w.r.t. the properties of [$$o\_x$$] and [$$o\_y$$]. Through these experiments (decomposition, composition and selective composition) we will be able to estimate the affordances of combined or de-composed objects, and verify this estimation empirically, shedding light on the nature of these affordance operators. 5 Conclusion and Future Work We introduced a Bayesian architecture for learning sensorimotor representations from the interaction between objects, robot actions, and the generated effects. In particular, it employs Gaussian random variables that capture the quantitative aspect of actions and effects. We introduce the concept of primary objects to capture prior knowledge on their affordances. We also introduce the concept of composite objects, for which we want to identify a relationship between the objects they are composed of, and the way they are assembled, in order automatically infer their affordances. We performed experiments to infer the affordances of composite objects, based on prior knowledge about the affordances of the primary objects that constitute them. Results form the learned Bayesian network showed information regarding the acquisition, maintenance, and loss of affordances by the employed primary objects, depending on their position in the composite object. The obtained results suggest that it may be possible to define an operator acting on the elements of affordances, which could predict the affordances of new objects, obtained through the combination of known objects. In our future work, we plan to identify the salient features of objects that endow these objects with specific affordances. These salient features can be identified while performing object composition. In this case, a gained affordance would be related to features acquired after object composition. Salient features can also be identified during object decomposition. In this case, a lost affordance would be related to features lost after object decomposition into constituent parts. Although our approach is a statistically based learning technique, it would be interesting to analyse other approaches that could provide statistically similar results or improvement with fewer interactions. It would be interesting to employ algorithms that can identify causal relationships between actions, object features and effects with as few observations as possible (one or two). Acknowledgments This work has been funded by a DGA (French National Defense Agency) scholarship (ER), and by French Agence Nationale de la Recherche ROBOERGOSUM project under reference ANR-12-CORD-0030. References 1. Gibson, J.: The theory of affordances. Perceiving Act. Know. Toward Ecol. Psychol. 67–82 (1977) 2. Sahin, E., Cakmak, M., Dogar, M., et al.: To afford or not to afford: a new formalization of affordances toward affordance-based robot control. Adaptive Behav. 15, 447–472 (2007)CrossRef 3. Montesano, L., et al.: Learning object affordances: from sensory–motor coordination to imitation. IEEE Trans. Robot. 24(1), 15–26 (2008) 4. Stoytchev, A.: Learning the affordances of tools using a behavior-grounded approach. In: Rome, E., Hertzberg, J., Dorffner, G. (eds.) Towards Affordance-Based Robot Control. LNCS (LNAI), vol. 4760, pp. 140–158. Springer, Heidelberg (2008). doi:10.​1007/​978-3-540-77915-5\_​10 CrossRef 5. Ugur, E., Sahin, E.: Traversability: a case study for learning and perceiving affordances in robots. Adaptive Behav. 18(3–4), 258–284 (2010)CrossRef 6. Hermans, T., Rehg, J.M., Bobick, A.: Affordance prediction via learned object attributes. In: IEEE ICRA, Workshop on Semantic Perception, Mapping, and Exploration (2011) 7. Jain, R., Inamura, T.: Bayesian learning of tool affordances based on generalization of functional feature to estimate effects of unseen tools. Artif. Life Robot. 18, 95–103 (2013). Springer 8. Moldovan, B., Moreno, P., van Otterlo, M.: On the use of probabilistic relational affordance models for sequential manipulation tasks in robotics. In: IEEE ICRA, pp. 1290–1295 (2013) 9. Zhu, Y., Yibiao, Z., Song, C.Z.: Understanding tools: task-oriented object modeling, learning and recognition. In: Proceedings of IEEE CVPR (2015) 10. Ugur, E., Nagai, Y., Sahin, E., Oztop, E.: Staged development of robot skills: behavior formation, affordance learning and imitation with motionese. IEEE Trans. Auton. Ment. Dev. 7, 119–139 (2015)CrossRef 11. Papon, J., Abramov, A., Schoeler, M., et al.: Voxel cloud connectivity segmentation - supervoxels for point clouds. In: Proceedings of IEEE CVPR, pp. 2027–2034 (2013) 12. Comaniciu, D., Meer, P., Member, S.: Mean shift: a robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 24(5), 603619 (2002)CrossRef 13. Tsamardinos, I., Brown, L.E., Aliferis, C.F.: The max-min hill-climbing Bayesian network structure learning algorithm. Mach. Learn. 65(1), 31–78 (2006)CrossRef 14. Koski, T.J.T., Noble, J.M.: A review of Bayesian networks and structure learning. Mathematica Applicanda 40(1), 53–103 (2012)MathSciNetMATH 15. Geiger, D., Heckerman, D.: Learning Gaussian networks. In: Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 235–243 (1994) 16. Dehban, A., Jamone, L., Kampff, A.R., Santos-Victor, J.: Denoising auto-encoders for learning of objects and tools affordances in continuous space. In: IEEE ICRA (2016) 17. Geiger, D., Heckerman, D.: Learning Gaussian networks. In: Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 235–243 (1994) 18. Chiu, E., Lin, J., Mcferron, B., Petigara, N., Seshasai, S.: Mathematical Theory of Claude Shannon (2001) 19. Harnad, S.: The symbol grounding problem. Physica D Nonlinear Phenom. 42(1-3), 335–346 (1990) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_60 Experimental Evaluation of Hybrid Conditional Planning for Service Robotics Ahmed Nouman¹  , Ibrahim Faruk Yalciner¹  , Esra Erdem¹   and Volkan Patoglu¹   (1) Sabancı University, Istanbul, Turkey     Ahmed Nouman Email: ahmednouman@sabanciuniv.edu URL: http://cogrobo.sabanciuniv.edu   Ibrahim Faruk Yalciner Email: fyalciner@sabanciuniv.edu URL: http://cogrobo.sabanciuniv.edu   Esra Erdem Email: esraerdem@sabanciuniv.edu URL: http://cogrobo.sabanciuniv.edu   Volkan Patoglu (Corresponding author) Email: vpatoglu@sabanciuniv.edu URL: http://cogrobo.sabanciuniv.edu Abstract Conditional planning enables planning for the sensing actions and their possible outcomes in addition to actuation actions, and allows for addressing uncertainties due to partial observability at the time of offline planning. Therefore, the plans (called conditional plans) computed by conditional planners can be viewed as trees of (deterministic) actuation actions and (nondeterministic) sensing actions. Hybrid conditional planning extends conditional planning further by integrating low-level feasibility checks into executability conditions of actuation actions in conditional plans. We introduce a novel hybrid conditional planning method, which extends hybrid sequential planning with nondeterministic sensing actions and utilizes this extension to compute branches of a conditional plan in parallel. We evaluate this method in a service robotics domain, by means of a set of experiments over dynamic simulations, from the perspectives of computational efficiency and plan quality. Keywords Conditional planningMotion planningTask planning 1 Introduction While robots execute plans in dynamic environments, discrepancies might occur between expected states and observed states due to possible uncertainties that were not or could not be considered during planning. The robot may recover from these discrepancies by replanning. To reduce the number of replanning during execution, instead of finding sequential plans, conditional plans can be computed. Indeed, conditional planning is concerned with planning the actuation actions of robots to achieve their goals in the presence of incomplete information and sensing actions [1, 2]. Conditional plans can be viewed as trees of (deterministic) actuation actions and (nondeterministic) sensing actions (Fig. 1), and computing such plans is one of the hardest planning problems [3, 4]. Even for polynomially bounded plans with limited number of sensing actions, its complexity is [$$\varSigma ^P\_2$$]-complete [5]. [] Fig. 1. A Sample Conditional Plan Despite such a high complexity, we can see a variety of online conditional planners, such as CLG [6], K-Planner [7], SDR [8] and HCP [9], and offline conditional planners, such as Contingent-FF [10], POND [11] and PKS [12, 13]. Online computation of conditional plans is generally easier than offline conditional planning because integrating online sensing with planning eliminates the need to plan for a potentially exponential number of contingencies; on the other hand, the goal may not be reached even if it is possible. Offline computation constructs conditional plans with decision points for sensing outcomes, and guarantees that the goal is reached if possible; the plan is more general since it can deal with alternative sensing outcomes. Some of the offline conditional planning approaches (e.g., CLG) are compilation-based: they decide for the order of sensing actions and then compute the branches of the conditional by means of conformant planning. These approaches are shown to be more efficient [6] than the search-based approaches that view conditional planning as a nondeterministic search problem in belief space (e.g., Contingent-FF, PKS, POND). Motivated by robotics applications where planning is done under partial observability, we introduce a novel generic hybrid conditional planning method with an offline compilation-based approach. Our approach is different from other offline compilation-based approaches since it is hybrid (so low-level feasibility checks are integrated into conditional planning), and it models sensing actions as nondeterministic actions (so ordering of sensing actions and solving conformant planning problems are not needed as in related work). We evaluate the usefulness of our approach with a set of experiments over a service robotics domain, from the perspectives of computational efficiency and plan quality. 2 Computing a Hybrid Conditional Plan Our method computes a hybrid conditional plan incrementally by computing branches of the tree in parallel. Each branch of the tree is computed with a hybrid sequential planning approach embedded with sensing actions. 2.1 Computing a Branch of a Hybrid Conditional Plan Tree To compute a branch of a tree that characterizes a conditional plan, we should be able to compute a sequential plan of actuation actions and sensing actions. For that, we start with representing these two sorts of actions in the action description language [$$\mathcal{C}+$$] [14]. Describing actuation actions. We suppose that the actuation actions are deterministic. We represent the preconditions and effects of actuation actions by formulas. For instance, the effect of the action placeOn(L1) of the robot placing the object that it is holding onto a location L1 can be described as follows: [$$ { placeOn(L\textit{1}) \ {\mathbf{causes\ }} objAt(T\textit{1})=L\textit{1} {\mathbf{\ if\ }} objAt(T\textit{1})=hand } $$] which expresses that, after the robot places the object T1 that it holds onto location L1, the location of the object T1 becomes L1. A precondition of this action, that the robot has to be near L1, can be expressed by the formula [$${{\mathbf{nonexecutable\ }} placeOn(L\textit{1}) {\mathbf{\ if\ }} robAt \ne L\textit{1} }.$$] External computations (e.g., feasibility checks) can also be embedded in the formulas for hybrid planning in the spirit of [15]. For instance, the formula [$$\begin{array}l {{\mathbf{nonexecutable\ }} placeOn(table) {\mathbf{\ if\ }} objAt(T\textit{1})=hand, robAt=R\textit{1} } \\ \quad {\mathbf{\ where\ }} isNotPlaceable(T\textit{1},R\textit{1}) \end{array}$$] expresses that the robot can place the object on the table if the desired location of the object on the table is reachable by the robot. The low-level checks, like reachability check, are performed externally by a program: the robot samples candidate goal locations in the relevant part of the table and, for each candidate location, it checks for the existence of a collision-free trajectory. As a side effect of this computation it also computes a feasible pose of the robot around the table. The result of this external computation is extracted by the atom isNotPlaceable(T1, R1) and embedded into the precondition of placeOn(L1). Describing sensing actions. Now, let us consider some properties of the domain that are not fully observable. For instance, cleanliness of all objects are not fully observable; so the robot does not know about all of them. To learn about the cleanliness of an object, the robot has to inspect the object; but this may only be possible during the execution of a plan when the robot can go to and manipulate the object. Therefore, representing sensing actions is a bit more challenging: a sensing action changes knowledge state (instead of a world state) and a sensing action has nondeterministic outcomes. We introduce a novel method for representing sensing actions. First, we define the preconditions of sensing actions. For instance, the robot can check for the cleanliness of an object if is holding the object closely. This is expressed by the formula: [$$ {{\mathbf{nonexecutable\ }} checkisClean(T\textit{1}) {\mathbf{\ if\ }} objAt(T\textit{1})\ne hand }. $$] where checkisClean(T1) is the sensing action of checking for the cleanliness of the object T1. As a direct effect of a sensing action, we know that the relevant sensing checks are performed. Otherwise, we assume that by default the sensing checks are not already performed. For instance, by default, the cleanliness of every object T1 is not already checked: [$$ {{\mathbf{default\ }} checkedClean(T\textit{1}) = no }. $$] The cleanliness of object T1 is checked after the robot performs the relevant sensing action checkisClean(T1): [$$ { checkisClean(T\textit{1}) \ {\mathbf{causes\ }} checkedClean(T\textit{1})=yes {\mathbf{\ if\ }} objAt(T\textit{1})=hand }. $$] Nondeterministic outcomes of a sensing action are described as indirect effects utilizing external atoms. Consider, for instance, the sensing action labeled checkisClean(T1). Its possible outcomes are that the object T1 is clean or not. One of these outcomes can be generated nondeterministically by a program whose output is denoted by the atom outcomeCheckClean(T1, C1); here C1 can be yes or no. Then, we can describe that the sensing action checkisClean has nondeterministic outcomes, by the formula [$$\begin{array}l {{\mathbf{caused\ }} isClean(T\textit{1})=C\textit{1} {\mathbf{\ if\ }} checkedClean(T\textit{1}) } \\ \quad {\mathbf{\ where\ }} outcomeCheckClean(T\textit{1},C\textit{1}) \end{array}$$] where C1 is yes or no. Note that [$$isClean(T1)=C1$$] is described as a ramification of checkisClean(T1) (via its direct effect checkedClean(T1)). Computing a branch. Once we describe the actuation actions and the sensing actions, we can describe a planning problem with its initial state and goals and ask for a plan of length k. For instance, the following formula describes what is known and what is unknown initially at step 0, and what is desired in a goal state at step k: [$$ \begin{array}l robAt=tableLeft\ \mathbf{holds\, at }\ 0\ \wedge \\ objAt(waterGlass)=shelfA\ \mathbf{holds\, at }\ 0\ \wedge \\ isClean(plate)=yes\ \mathbf{holds\, at }\ 0\ \wedge ... \wedge \\ objAt(plate)=unknown\ \mathbf{holds\, at }\ 0\ \wedge \\ isClean(waterGlass)=unknown\ \mathbf{holds\, at }\ 0 \wedge ... \wedge \\ tableSet\ \mathbf{holds\, at }\ k . \end{array}$$] Now, using the hybrid planner CCalc [16] with the robotic action domain description and the planning problem, we can compute a hybrid sequential plan of length k, which consists of actuation and sensing actions. By iterating over [$$k=0,1,...,K$$] for a large enough K, we can find a shortest hybrid sequential plan. Along with such a plan P, CCalc also computes its history H, which involves information about the intermediate states (i.e., H[i] is the state where the i’th action P[i] of the plan is executed). 2.2 Parallel Computation of All Branches of a Hybrid Conditional Plan Tree Our algorithm first computes one branch B of the tree with the root root (if exists), which characterizes a shortest hybrid sequential plan P from an initial state to a goal state and a history H of the plan. Each sensing node A of B with a depth i denotes the sensing action P[i], and the label of the outgoing edge from A in B denotes a sensing outcome o of A. Then note that the state at which the next action in B is executed is obtained from H[i] (at which A is executed) by updating the state information with respect to the sensing outcome o. Next, based on this observation, for every sensing node in B with depth i and for every other sensing outcome o of P[i], our algorithm extracts the state information H[i] (at which A is executed), obtains the new state CS from H[i] with respect to the new outcome o, and constructs a hybrid conditional plan from this CS to a goal state. Here, all these hybrid conditional plans for all the sensing outcomes of A are computed in parallel. In the end, if the computed hybrid conditional plan has a maximum branching factor b and the maximum depth n, the conditional plan has at most [$$b^d$$] leaves. Therefore, our hybrid conditional planner calls the hybrid sequential planner CCalc at most [$$b^d$$] times to compute the branches of the conditional plan. Our method is generic and applicable to any robotic action domain with actuation actions and sensing actions. [] Fig. 2. The robot is manipulating a bowl from Cabinet B in dynamic simulation (top) and a fork during physical implementation (bottom). 3 Case Study: Kitchen Table Setting To demonstrate the feasibility of our approach for complex robotics domains, we consider a dynamic service robotics scenario, where a bimanual mobile manipulator is responsible for setting up a kitchen table, as depicted in Fig. 2. The mobile manipulator can navigate around the kitchen to pick up and place objects as long as collision free trajectories exist. Kitchenware, such as mugs, spoons, knives, plates may be found in cabinets or may be left on other flat surfaces, such as counter tops or shelves. In the kitchen, there also exists a faucet to clean kitchenware as required. Finally, there is a kitchen table, where the proper kitchenware must be placed on to comply with table setting etiquette. For the table set-up scenario, four actuation actions are considered in our domain: goto, pickup, placeon and clean. Note that, in hybrid planning the feasibility of these actions needs to be checked. A probabilistic motion planner (based on OMPL [17]) is used to implement the precondition of goto action, while reachability, graspability and inverse kinematics checks (based on OpenRAVE [18]) are implemented as preconditions of pickup and placeon actions. Note that the environment is not completely observable during planning. Three types of possible sources of uncertainties are considered in this domain. First, the person might have different food preferences (e.g., soup, pizza, salad), which can only be revealed when directly communicated with the user during plan execution; that is, this information is not available for planning ahead of time. This uncertainty directly affects the plan, as the kitchenware to be placed on table varies based on the type of the meal (e.g., spoon and bowl are required for having soup, while these kitchenware are irrelevant for eating pizza). Second, the locations of kitchenware are uncertain and might not be known by the robot during the planning phase. These locations can be reliably gathered only if the robot actively searches for these objects when it needs to use them. Third, the cleanliness/dirtiness of the objects may not be known for sure. Along these lines, three sensing actions and information gathering are considered in our domain: checkFoodType, checkLoc and checkisClean. The first action checkFoodType is used to determine the type of food the user desires. During the offline planning phase, its outcome is nondeterministically determined. The sensing action checkLoc is utilized to resolve the uncertainty of the locations of kitchenware. During the offline planning phase, the locations of objects are assigned nondeterministically. Finally, checkisClean is introduced to determine cleanliness of kitchenware. Like other sensing actions, during the offline planning phase, its value is defined nondeterministically. 4 Experimental Evaluation To evaluate our hybrid conditional planner, we consider 12 different scenarios in the kitchen table setting domain. These test scenarios are developed by varying initial setting, goal settings and possible uncertainty about the environment. The scenarios vary from simple ones with less partial observability to complex ones where the robot is very uncertain about its initial state. All experiments are performed on a Linux server with 32 2.4 GHz Intel E5-2665 CPU cores and 64 GB memory. For large problem instances, all 32 cores are utilized, while the experiments never require more than 3.5 GB memory. 4.1 Hybrid Conditional Planning We evaluate the scalability of our approach to hybrid conditional planning over the 12 instances. The results are shown in Table 1. For each instance, the total number L of leaves in the tree (i.e., the number of different hybrid sequential plans from an initial state to a goal state), the maximum length [$$D\_{max}$$] of a branch from the root to a leaf (i.e., the maximum length of a hybrid sequential plan that can be executed by the robot) and the number A of actuation actions and the number S of sensing actions in that branch, the total number DN of decision nodes that denote sensing actions, the maximum branching factor BF (i.e., the maximum number of sensory outcomes), and the total number N of nodes in the tree (i.e., the size of the tree) are reported. Considering that the intractability of conditional planning is beyond NP-hard, as expected, both the size of the hybrid conditional plan and the computation time increase as the problems require longer and more branches. For Instance 12, a hybrid conditional plan including 33442 actions is computed in 287.57 s. Note that the hybrid conditional plan for Instance 12 represents 3909 different ways of reaching a goal under partial observability. Therefore, about 0.07 s is spent for each hybrid sequential plan of average length 39. Table 1. Hybrid conditional planning +-------+-----------------------------+--------------------------+------+------+-------+----------+ | Scen. | [$$D\_{max}$$] [$$(A{+}S)$$] | [$$BF\_{max} (BF\_{av})$$] | L | DN | N | Time [s] | +:======+:============================+:=========================+:=====+:=====+:======+:=========+ | 1 | 22 (18 + 4) | 4 (2.52) | 36 | 23 | 404 | 10.37 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 2 | 29 (25 + 4) | 4 (2.35) | 24 | 17 | 291 | 16.39 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 3 | 32 (27 + 5) | 4 (2.16) | 52 | 44 | 527 | 21.24 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 4 | 33 (28 + 5) | 4 (2.5) | 112 | 74 | 1266 | 24.01 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 5 | 36 (30 + 6) | 4 (2.29) | 144 | 111 | 1455 | 41.73 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 6 | 33 (27 + 6) | 4 (2.75) | 392 | 224 | 4476 | 42.77 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 7 | 35 (29 + 6) | 4 (2.18) | 272 | 230 | 2449 | 50.67 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 8 | 33 (27 + 6) | 4 (2.33) | 400 | 299 | 3646 | 44.18 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 9 | 38 (30 + 8) | 4 (2.29) | 1790 | 1384 | 18022 | 155.88 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 10 | 38 (30 + 8) | 4 (2.29) | 1954 | 1517 | 19250 | 163.79 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 11 | 37 (29 + 8) | 4 (2.31) | 1893 | 1440 | 17562 | 150.02 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ | 12 | 39 (30 + 9) | 4 (2.11) | 3909 | 3518 | 33442 | 287.57 | +-------+-----------------------------+--------------------------+------+------+-------+----------+ 4.2 Hybrid Conditional Planning Vs Plan Execution Monitoring with Replanning An alternative approach to hybrid conditional planning is to utilize plan execution monitoring: (1) to compute a hybrid sequential plan that consists of actuation actions using a classical planner, (2) to monitor its execution by sensing observable fluents, and then (3) to compute a new plan when a discrepancy is detected between the expected and the observed values of these fluents. Since classical planners require complete information about the initial state, in our experiments, when values of some fluents are not known (due to partial observability), they are assigned nondeterministically among possible values (i.e., outcomes of relevant sensing actions). Due to this nondeterministic assignment, each instance is solved 5 times, and the averages are reported for some results. During the plan execution, the observable fluents are sensed at intervals of [$$t=3$$] and [$$t=5$$] steps, and compared to their expected values according to the plan for discrepancies. Table 2 presents results of these experiments for plan execution monitoring with replanning. The computation time of the plan, the number R of replanning attempts, and the total plan length K to reach the goal are reported for the 12 test instances, together with the optimal plan length [$$K\_{opt}$$] for each scenario. The maximum [$$(.)\_{max}$$] and the average [$$(.)\_{av}$$] values are listed in the table. Table 2. Execution monitoring with replanning ([$$t=3$$]) vs ([$$t=5$$]) +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | Scen. | [$$K\_{opt}$$] | Execution Monitoring ([$$t=3$$]) | | | Execution Monitoring ([$$t=5$$]) | | | +:======+:==============+:=================================+:============================+:=========+:=================================+:============================+:=========+ | | | [$$K\_{max} (K\_{av})$$] | [$$R\_{max}$$]([$$R\_{av}$$]) | Time [s] | [$$K\_{max} (K\_{av})$$] | [$$R\_{max}$$]([$$R\_{av}$$]) | Time [s] | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 1 | 18 | 22 (19.80) | 3 (2.00) | 5.37 | 22 (19.6) | 3 (2.2) | 6.33 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 2 | 24 | 31 (27.00) | 3 (1.60) | 7.80 | 31 (27.6) | 3 (2.0) | 8.36 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 3 | 16 | 19 (17.80) | 2 (1.00) | 3.19 | 19 (17.8) | 2 (1.4) | 3.32 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 4 | 16 | 24 (20.60) | 3 (2.20) | 4.55 | 27 (20.8) | 4 (2.0) | 4.01 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 5 | 25 | 34 (29.00) | 5 (4.00) | 12.78 | 34 (29.0) | 4 (2.2) | 14.37 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 6 | 27 | 42 (35.00) | 6 (4.40) | 16.63 | 51 (35.0) | 7 (3.6) | 19.02 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 7 | 18 | 25 (21.80) | 4 (2.60) | 6.70 | 26 (22.4) | 3 (2.0) | 9.30 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 8 | 27 | 38 (33.20) | 5 (4.00) | 15.67 | 36 (30.4) | 4 (2.6) | 17.11 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 9 | 17 | 26 (21.80) | 4 (3.20) | 5.72 | 29 (26.2) | 4 (3.4) | 6.61 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 10 | 18 | 27 (22.00) | 5 (3.20) | 5.76 | 35 (26.2) | 5 (3.4) | 7.36 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 11 | 25 | 31 (30.00) | 5 (3.60) | 15.16 | 34 (30.2) | 4 (3.2) | 18.07 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ | 12 | 24 | 34 (36.60) | 5 (4.20) | 14.25 | 41 (36.6) | 6 (4.6) | 18.50 | +-------+---------------+----------------------------------+-----------------------------+----------+----------------------------------+-----------------------------+----------+ As expected, hybrid conditional planner spends more time on planning, since it needs to compute plans for all possible outcomes. For Instance 12, hybrid conditional planning takes 287.57 s to compute the whole conditional plan whereas execution monitoring with 6 replanning attempts takes 13.36 s to compute a sequential hybrid plan. On the other hand, the total number of actuation actions to be executed with respect to the computed conditional plan (e.g., 30 actuation actions for Instance 12) is in general less than the number of actuation actions to be executed with respect to the execution monitoring approach (e.g., 41 actuation actions for Instance 12). The number of actuation actions to be executed may be more important than the total planning time for robotic applications where execution of actions takes long time. 4.3 Full vs. Partial Hybrid Conditional Plans To evaluate the effect of computing the full hybrid conditional plan tree vs. a partial hybrid conditional plan tree, we allow sensing up to some depth: we revise our hybrid conditional planning algorithm slightly by generating branches for sensing outcomes up to some depth and then continue with hybrid sequential planning without further branching after this depth. By this way, computationally expensive conditional planning is allowed only for some part of the tree closer to the root. Table 3. Results of partial conditional planning with depth thresholds 6 and 12 [TABLE] Table 3 presents the results of our experiments when sensing is allowed until depths 6 and 12. As expected, computing partial hybrid conditional plans by allowing branches up to some depth is more efficient than computing the whole tree. For Instance 12, the computation time of a hybrid conditional plan reduces from 287.5 s (without any depth constraint) to 129.02 s for depth 12 and to 86.57 s for depth 6. On the other hand, since the longest branch of the tree may be different in each case, the maximum number of actuation actions to be executed in these partial hybrid conditional plans may vary. Note that, due to missing branches, it is possible that replanning attempts may be needed during the execution of partial conditional plans compared to the execution of full hybrid conditional plans. 5 Conclusion We have introduced a novel hybrid conditional planning method, which extends hybrid sequential planning with nondeterministic sensing actions and utilizes this extension to compute branches of a conditional plan in parallel. We have applied it to a robotics applications, where a mobile service robot sets up a kitchen table under partial observability. We have empirically evaluated our algorithm over various scenarios in this domain, (i) to check its usefulness compared to execution monitoring with replanning (so no sensing action is considered while planning/replanning), and (ii) to check the effect of computing full vs partial hybrid conditional plans. In our experiments for (i), we have observed that computing a hybrid conditional plan generally leads to execution of less number of actions, while the total time of planning is larger than that of execution monitoring. In that sense, computing a hybrid conditional plan in advance may be more preferable for applications where execution of actions takes more time and/or where replanning is not desired during execution. In our experiments for (ii), we have observed that computing partial hybrid conditional plans takes less time, while some branches may be missing. Thus, this approach may require replanning. In that sense, this approach provides an intermediate solution between computing complete hybrid conditional plans and execution monitoring with replanning. Acknowledgments This work is partially supported by TUBITAK Grant 114E491 (Chist-Era COACHES). References 1. Peot, M.A., Smith, D.E.: Conditional nonlinear planning. In: Proceedings of AIPS, pp. 189–197 (1992) 2. Pryor, L., Collins, G.: Planning for contingencies: a decision-based approach. JAIR 4, 287–339 (1996) 3. Haslum, P., Jonsson, P.: Some results on the complexity of planning with incomplete information. In: Proceedings of ECP, pp. 308–318 (1999) 4. Turner, H.: Polynomial-length planning spans the polynomial hierarchy. In: Proceedings of JELIA, pp. 111–124 (2002) 5. Baral, C., Kreinovich, V., Trejo, R.: Computational complexity of planning and approximate planning in presence of incompleteness. In: Proceedings of IJCAI, pp. 948–955 (1999) 6. Albore, A., Palacios, H., Geffner, H.: A translation-based approach to contingent planning. In: Proceedings of IJCAI, pp. 1623–1628 (2009) 7. Bonet, B., Geffner, H.: Planning under partial observability by classical replanning: theory and experiments. In: Proceedings of IJCAI, pp. 1936–1941 (2011) 8. Brafman, R.I., Shani, G.: Replanning in domains with partial information and sensing actions. JAIR 45, 565–600 (2012)MathSciNetMATH 9. Maliah, S., Brafman, R.I., Karpas, E., Shani, G.: Partially observable online contingent planning using landmark heuristics. In: Proceedings of ICAPS (2014) 10. Hoffmann, J., Brafman, R.I.: Contingent planning via heuristic forward search with implicit belief states. In: Proceedings of ICAPS, pp. 71–80 (2005) 11. Bryce, D., Kambhampati, S., Smith, D.E.: Planning graph heuristics for belief space search. JAIR 26, 35–99 (2006)MATH 12. Petrick, R.P.A., Bacchus, F.: A knowledge-based approach to planning with incomplete information and sensing. In: Proceedings of AIPS, pp. 212–222 (2002) 13. Petrick, R.P.A., Bacchus, F.: Extending the knowledge-based approach to planning with incomplete information and sensing. In: Proceedings of ICAPS, pp. 2–11 (2004) 14. Giunchiglia, E., Lee, J., Lifschitz, V., McCain, N., Turner, H.: Nonmonotonic causal theories. Artif. Intell. 153, 49–104 (2004)MathSciNetCrossRefMATH 15. Erdem, E., Haspalamutgil, K., Palaz, C., Patoglu, V., Uras, T.: Combining high-level causal reasoning with low-level geometric reasoning and motion planning for robotic manipulation. In: Proceedings of ICRA (2011) 16. McCain, N., Turner, H.: Causal theories of action and change. In: Proceedings of AAAI/IAAI, pp. 460–465 (1997) 17. Sucan, I.A., Moll, M., Kavraki, L.E.: The open motion planning library. IEEE Robot. Automation Mag. 19(4), 72–82 (2012)CrossRef 18. Diankov, R.: Automated construction of robotic manipulation programs. Ph.D. thesis, Carnegie Mellon University, Robotics Institute, August 2010 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_61 Improved Learning of Dynamics Models for Control Arun Venkatraman¹  , Roberto Capobianco²  , Lerrel Pinto¹  , Martial Hebert¹  , Daniele Nardi²   and J. Andrew Bagnell¹   (1) The Robotics Institute, Carnegie Mellon University, Pittsburgh, USA (2) Sapienza University of Rome, Rome, Italy     Arun Venkatraman (Corresponding author) Email: arunvenk@cs.cmu.edu   Roberto Capobianco Email: capobianco@dis.uniroma1.it   Lerrel Pinto Email: lerrelp@cs.cmu.edu   Martial Hebert Email: hebert@cs.cmu.edu   Daniele Nardi Email: nardi@dis.uniroma1.it   J. Andrew Bagnell Email: dbagnell@cs.cmu.edu Abstract Model-based reinforcement learning (MBRL) plays an important role in developing control strategies for robotic systems. However, when dealing with complex platforms, it is difficult to model systems dynamics with analytic models. While data-driven tools offer an alternative to tackle this problem, collecting data on physical systems is non-trivial. Hence, smart solutions are required to effectively learn dynamics models with small amount of examples. In this paper we present an extension to Data As Demonstrator for handling controlled dynamics in order to improve the multiple-step prediction capabilities of the learned dynamics models. Results show the efficacy of our algorithm in developing LQR, iLQR, and open-loop trajectory-based control strategies on simulated benchmarks as well as physical robot platforms. Keywords Reinforcement learningOptimal controlDynamics learningSequential prediction 1 Motivation Learning based approaches for controlling robotic systems, or more generally autonomous agents, fall into two primary categories: model-based [1–3] and model-free [4–7]. In this work, we focus on problems belonging to the former category, where a system transition function – a dynamics model – is used to guide the creation of a control policy. To generate low cost control policies, we need dynamics models that accurately capture the evolution of the true underlying system. However, with the increasing complexity of robotic technologies, it becomes difficult to robustly characterize robot dynamics a priori with simple analytic models. To tackle this problem and to scale model-based control techniques to new systems, prior work in dynamics learning has shown promising results in the modeling of dynamics system either by augmenting physics-based models [8] or through non-parametric, black-box learning [9]. Typically, the accuracy of data-driven dynamics models depends on the amount of collected data. However, for many robotic systems, it can be labor intensive and expensive to acquire large data-sets for training models. Hence, it is often desirable to improve model fidelity by observing fewer example trajectories on the physical system. We propose a model-based reinforcement learning (control) framework that reuses collected data to improve the learned dynamical system models. We leverage Data As Demonstrator (DaD) [10] to generate synthetic training examples for improving the learned dynamics model’s multi-step prediction capabilities. However, the original DaD algorithm only handles uncontrolled dynamics. Here, we extend DaD to also work with controlled systems. Our experimental results with this mehtod show it is possible to achieve good control performance with less data. 2 Technical Approach 2.1 Preliminaries In this work, we consider systems that operate as a Markov Decision Process (MDP). The MDP is defined by states [$$x\_t$$] that follow an unknown state transition (dynamics) function [$$f(x\_t,u\_t)\rightarrow x\_{t+1}$$], where [$$u\_t$$] are controls (actions). We additionally assume a known cost function [$$c : x\_t,u\_t \rightarrow \mathbb {R}$$]. Solving this MDP consists of minimizing the (expected) cumulative cost over a time horizon T, which may be infinite, by finding a control policy [$$\pi (x\_t)$$]: [$$\begin{aligned} \pi&= \mathop {\mathrm{arg\,min}}\limits \_{\pi } \sum \_{t=0}^{T-1} c(x\_t, u\_t) \quad \text {s.t.} ~ u\_t = \pi (x\_t) ~ \text {and} ~ x\_{t+1} = f(x\_t, u\_t). \end{aligned}$$] (1) Model-based reinforcement learning (MBRL) attempts to solve the above in situations where the underlying dynamics and sometimes the cost function are unknown, adding the burden of deriving estimators of both. In this work, we assume knowledge of the cost function and focus solely on system identification in which we fit a function approximator [$$\hat{f}$$] to be used as the dynamics constraint for the policy optimization in Eq. 1. System identification has been studied in the traditional controls literature [11] and in the machine learning community [12, 13]. Some of these approaches [13, 14] can provide performance guarantees in the infinite-data and model-realizable case (i.e. the true underlying model is linear), while others [8, 12] optimize the the single-step predictive criterion [$$\begin{aligned} \hat{f} = \mathrm{arg\,min}\sum \_{t=1}^{T-1} ||x\_{t} - \hat{f}(x\_{t-1}, u\_{t-1}) ||\_2^2 \end{aligned}$$] (2) from a data-set of trajectories [$$\{(x\_0, u\_0) \ldots , (x\_{T-1},u\_{T-1})\}$$] of state-action pairs collected from the system. The downside of optimizing Eq. 2 is that errors can compound (up to exponentially [10, 15]) when using [$$\hat{f}$$] to forward predict multiple time-steps into the future. To solve this problem, we propose an extension to the algorithm presented in [10] for learning multi-step predictive models for system identification in the controlled setting and we experimentally verify that this improves the efficiency of MBRL methods. 2.2 System Identification for Control Simply collecting system trajectories, learning the dynamics, and optimizing the control policy typically results in inaccurate or unstable dynamics models and poorly performing control policies. An iterative process to achieve better performance was formalized in the MBRL literature [16, 17], generating a procedure similar to the one outlined in Fig. 1(left). By alternating between fitting the dynamics model and collecting new data under the distribution induced by the policy, the model becomes better at capturing the dynamics over the important regions of the state-space while the control policy derived from the dynamics is either improved over that region or erroneously exploits inaccuracies in the dynamics model. Thus in each iteration, a good policy is found or data is collected from the controller’s mistakes for improvement at the next iteration. [] Fig. 1. (Left) DAgger system identification for model learning and control. (Right) DaD improves the multi-step prediction performance of learned dynamics models. We specifically refer to the left loop in Fig. 1 as the DAgger (Data-set Aggregation) system identification learning framework [18]. A key difference lies in the aggregation step of the procedure in order to provide model agnostic guarantees. At the beginning of the algorithm, DAgger initializes an empty training data-set and an exploration policy [$$\pi \_{\text {explore}}(x\_t)$$] that generates an action (control) [$$u\_t$$] given a state [$$x\_t$$]. This initial policy can either consist of random controls (referred to as a random policy) or be an expert demonstration. Then, DAgger iteratively proceeds by: (1) executing the latest policy to collect a set of new trajectories [$$\{\xi \_i\}\_{i=0}^{k-1}$$] where [$$\xi \_i = \{(x\_t,u\_t)...\}\_i$$] is a time series of state-action pairs; (2) aggregating the trajectories [$$\{\xi \_i\}\_{i=0}^{k-1}$$] into the training data-set; (3) learning from the data-set a forward dynamics model [$$\hat{f}(x\_t,u\_t)\rightarrow x\_{t+1}$$]; (4) optimizing a new control policy [$$\pi $$] that minimizes a given cost function [$$c(x\_t,u\_t)$$] over the time horizon T of the control problem; (5) tracking the best policy from all those generated. During the execution of the first DAgger loop, the state distribution induced by [$$\pi $$] can greatly differ from the initial [$$\pi \_{\text {explore}}$$]; the first generated policies may perform poorly due to inaccuracies in [$$\hat{f}$$]. The iterative procedure refines the dynamics model by aggregating data from states induced by running the system with [$$\pi \_1,\ldots ,\pi \_N$$]. In particular, Ross et al. [18] provide theoretical guarantees for this algorithm, as long as we also sample states from the exploration distribution when aggregating data. This can be simply obtained by aggregating additionally a constant percentage of trajectories obtained from the exploration distribution. For example, this can be obtained by sampling from the original dataset or running the system with [$$\pi \_{\text {explore}}$$]. This helps prevent the learner from focusing only on the induced distributions from the policies. Finally, the algorithm does not guarantee that the policy gets monotonically better with every iteration. Thus, we must track the performance of the executed policies and return the best one obtained so far. 2.3 Improving the Multi-step Predictive Performance Despite the use of iterative procedures, MBRL methods can suffer from compounding errors during the policy optimization phase, either during the forward planning with the model or in the back-propagation of the model gradients [19]. The cascading error is due to sequential predictions performed with the learned model. By performing sequential predictions, the model is recursively applied and its previous output is fed as its new input [$$\begin{aligned} \hat{x}\_{t+1} = \hat{f}(\hat{x}\_t, u\_t). \end{aligned}$$] (3) This can result in a significant deviation from the true system. Ideally, we would address this by learning a dynamics model that is optimized for the multiple-step predictive performance (e.g. lagged-error criterion [20]) [$$\begin{aligned} \hat{f} = \mathrm{arg\,min}\sum \_{t=1}^{T-1} \sum \_{j=t}^{T-1} ||x\_{j} - \hat{x}\_{j|t} ||\_2^2, \end{aligned}$$] (4) where [$$\hat{x}\_{j|t}$$] is computed by applying Eq. 3 for [$$j-t$$] times to get the roll-out prediction [$$\hat{x}\_{j|t}$$] starting from [$$x\_t$$]. However, Eq. 4 is difficult to optimize. It is non-convex in the model parameters of our learner f and also differs from standard supervised learning loss functions. For these reasons, much of the dynamics learning literature focuses on optimization of the single-step loss (Eq. 2) utilizing existing supervised learning procedures such as Gaussian Process [8], Kernel Regression [9], and Support-Vector regression [21]. In order to achieve good multi-step predictive performance while using supervised learning methods, we recently introduced Data As Demonstrator (DaD) [10]. DaD is a meta-algorithm that augments the traditional dynamics learning method with an additional iterative procedure. The canonical dynamics learning method uses a learning procedure to find a model [$$\hat{f}\_0$$] that minimizes the single-step predictive performance. Conversely, to minimize the cascading error, DaD specifically targets the distribution induced from sequential application of the model. To this end, the algorithm performs “roll-outs” with the learned model (in gray, top-left of the DaD-loop, Fig. 1) along trajectories from the training data (shown in red, top-right of the DaD-loop, Fig. 1). Then, DaD generates synthetic data by creating new input-target pairs and pointing each prediction to the correct time-indexed state along the training trajectory¹. [] While we refer the reader to [10] for theoretical details, DaD (as presented in [10]) only handles uncontrolled dynamics. Here we introduce an extension to this algorithm to enable it to handle controlled systems to be used in the MBRL setting, as shown on the right side of Fig. 1. As detailed in Algorithm 1, we learn a forward dynamics by optimizing a supervised learning loss to predict targets [$$x\_{t+1}$$] from “features” [$$[x\_t,u\_t]$$]. Also in this case, we rely on a data aggregation procedure on the training data-set. When executing the roll-out of the model (line 6, Algorithm 1), we start at the state [$$x\_0$$] taken from the first timestep of the trajectory [$$\xi \_k$$] and forward simulate by performing recursive updates (Eq. 3) with the learned model [$$\hat{f}\_n$$] and the true sequence of controls [$$\{u\_t\}$$] from [$$\xi \_k$$]. Then, when augmenting the data-set, we utilize the original control and the estimated state to create an input-target pair [$$([\hat{x}\_t, u\_t], x\_{t+1})$$], (line 7, Algorithm 1). Here, differently from the procedure in [15], during the DaD step we do not separate the state transition dynamics from the controls but do a joint optimization of the model. Intuitively, our algorithm (detailed in Algorithm 1) aims to give the learner synthetic recovery examples to compensate for the compounding error. In practice, as we want to upper-bound the loss for the learner (i.e. make the learning problem easier for finding [$$\hat{f}\_{n+1}$$]), we discard examples during the aggregation step if the error was too high (significantly higher than the trajectory’s magnitude). In the next section, we experimentally verify that this extension allows us to find better control policies with less data than the traditional approach. 3 Experimental Evaluation We evaluate our algorithm (‘DAgger+DaD’) both on simulated dynamical systems² and real robotic platforms. In particular, we consider two simulated scenarios: the classic cartpole swing-up problem and the challenging helicopter hovering problem. Additionally, we show the applicability of our approach on real systems such as the Videre Erratic mobile base and the Baxter robot. In each described experiment, we learn dynamical models of the form: [$$\begin{aligned} \varDelta \_t \leftarrow f(x\_t, u\_t),\quad \text {where}\ \varDelta \_t = x\_{t+1} - x\_t. \end{aligned}$$] (5) This parametrization is similar to [17], where the previous state is used as the mean prior for predicting the next state. Due to the difficulty of optimizing Eq. 1 under arbitrary dynamics and cost models, for simplicity, we focus on minimizing a sum-of-quadratics cost-to-go function: [$$\begin{aligned} \sum \_t c(x\_t,u\_t) = \sum \_t x\_t^T Q\_t x\_t + u\_t^T R\_t u\_t. \end{aligned}$$] (6) By using this form of cost function, along with a linearization of the learned dynamics model, we can formulate the policy synthesis problem as that of a Linear Quadratic Regulator, which allows the policy to be computed in closed-form. In each experiment, we compare ‘DaD+DAgger’ to ‘DAgger Only’. For Cartpole, Erratic, and Baxter the data-set was initialized with a random exploratory policy, while the helicopter problem received both a random and an expert policy (generated form LQR on the true dynamics) roll-out for initialization. The simulated cartpole and helicopter experiments got additional exploratory roll-outs on every iteration of DAgger with the random and expert policies respectively. For the Baxter robot experiment, instead, we achieved exploration through an [$$\epsilon $$]-random controller that added random perturbation to the commanded control with [$$\epsilon $$] probability. For each method, we report the average cumulative cost at each iteration of DAgger as averaged over ran trials. Three trials were run on the Erratic while five were ran for other benchmarks. The charts that illustrate the obtained results are all normalized to the highest observed cost, since the cost functions are tuned to promote the desired control behavior rather than to have a physical interpretation. 3.1 Simulation Experiments Cartpole swing-up: The cartpole swing-up is a classic controls and MBRL benchmark where the goal is to swing-up a pendulum by only applying a linear force on the translatable base. We learn a linear dynamics model in the form of Eq. 5 using Ridge Regression (regularized linear regression). We then use an iterative Linear Quadratic Regulator [22] (iLQR) controller about a nominal swing-up trajectory in state-space with an initial control trajectory of zeros. The iLQR optimization procedure finds a sequence of states and controls feasible under the learned dynamics model to minimize the cost. The simulated system has system-transition noise and we compare our algorithm’s performance both with and without control noise to simulate the effects of noisy actuation on a real-robot. We show results in Fig. 2 of the evaluated trajectory costs accumulated over the problem’s time horizon. [] Fig. 2. Controlling a simulated cartpole for swing-up behavior. [] Fig. 3. Controlling a simulated helicopter to hover. Note the log-scale on cost. Helicopter simulator: Helicopter hovering is a difficult problem due to the instability of the dynamical system, especially under noise. We utilize the helicopter simulator from [16] with additive white noise and follow a problem setup similar to [18]. We make the problem more difficult by initializing the helicopter at states up to 10 meters away from the nominal hover configuration. As the dynamics are highly nonlinear, we show the advantage of using Random Fourier Features (RFF) regression [23] to learn a dynamics model in a 21-dimensional state space. We find a steady-state linear quadratic regulator (LQR) policy to map the helicopter’s state to the 4-D control input. The results in Fig. 3 show that DaD dramatically improves performance over only DAgger. 3.2 Real-Robot Experiments Videre Erratic: In this experiment, we control the velocity of a Videre Erratic mobile base. The goal is to drive the robot to a given position specified in the robot’s reference frame. The 3-D state vector includes the robot position and orientation while the 2-D control vector is the robot velocity. The dynamics model is learned using Ridge Regression. Unlike the other experiments, we use a trajectory-control policy that finds a sequence of controls [$$u\_1,\ldots ,u\_T$$] to apply open-loop at run time on the robot. We compute the control sequence by simulating the learned dynamics model [$$\hat{f}$$] with a simple proportional controller. Results are shown in Fig. 4. [] Fig. 4. Results for controlling a Videre Erratic differential-drive mobile robot. [] Fig. 5. Results on controlling a Baxter robot. We learn a dynamics model and compute a control policy to move the robot manipulator from state [$$x\_0$$] to [$$x\_T$$]. Baxter robot: We use the ‘DAgger+DaD’ approach to control a 7-degree-of-freedom manipulator to a target joint configuration. We command the robot arm in torque control mode with suppression of the inbuilt gravity compensation. The 14-dimensional state vector consists of the joint angles and their velocities. We learn the dynamics model using Ridge Regression and compute a steady-state LQR control policy, obtaining the results in Fig. 5. 4 Discussion In our simulation experiments we compared the performance obtained by applying ‘DAgger+DaD’ on a cartpole with and without control noise. Results show that the improvement of our method over ‘DAgger Only’ decreases in presence of actuation noise. This can be explained by the fact that, over the same generated nominal controls, the state trajectories obtained during each roll-out are slightly different and represent a limitation on the efficacy of the learner over the same number of iterations – i.e. there is a higher baseline error in the dynamics model. [] Fig. 6. Comparison of exploration policies. Cost values are not normalized across plots. In the case of the helicopter, we additionally compared the results obtained by using two different learning algorithms and by applying different exploration policies. For the former, we compared the non-linear RFF [23] regression against linear regression. As shown in Fig. 3(b), the nonlinear learner performs much better as it better captures the heavy nonlinearity of the helicopter dynamics. The DAgger method [18] requires drawing state-transition samples at every iteration from some exploration distribution. In Fig. 6(a), we compare using an expert exploration policy (LQR controller using the true dynamics) versus a random-control exploration policy. With DAgger+DaD, the learned dynamics and policy yield a stable behavior for both types of exploration, with some improvement using the expert policy. The DAgger Only baseline often is unable to learn a stable policy using the random exploration policy. We believe that DAgger+DaD learns a more stable multi-step predictive dynamics model – an important aspect for the Bellman backup during policy optimization. An interesting observation is that DAgger+DaD without the exploration policy does not lead to a significant performance difference (Fig. 6(b)) compared to the ‘DAgger Only’ baselines. This comparison shows the difference between [20] (no exploration) and [18] (constant fraction exploration). Note that to keep the amount of data constant in the trials without the exploration trajectories, the learners were given the difference as test trials under the current optimized policy. The real-robot evaluations show the applicability of our method on real systems and complex platforms. In particular, the Erratic experiments show that by using DaD, we are indeed able to get a better dynamics model for forward-prediction. This model can be used for trajectory generation and optimization as described in Sect. 3.2, where the sequence of obtained controls has been directly applied to the Erratic in an open-loop as a control trajectory. While the application of ‘DAgger+DaD’ on the Baxter robot results in a limited performance improvement, this confirms our hypothesis that, in robotic platforms characterized by high actuation noise (e.g. Baxter’s chain of noisy actuators), only smaller improvements over ‘DAgger Only’ can be achieved (consistent with the simulated noisy-actuation result in Fig. 2(b)). Additionally, the considered problem on the Baxter is relatively simple with control authority at every joint. In these settings, DAgger seemingly can still efficiently capture the dynamics of the system with only a minor benefit from the additional DaD loop. Acknowledgements This material is based upon work supported in part by: National Science Foundation Graduate Research Fellowship Grant No. DGE-1252522, National Science Foundation NRI Purposeful Prediction Award No. 1227234, and ONR contract N000141512365. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. References 1. Schaal, S., et al.: Learning from demonstration. In: NIPS, pp. 1040–1046 (1997) 2. Bakker, B., Zhumatiy, V., Gruener, G., Schmidhuber, J.: Quasi-online reinforcement learning for robots. In: ICRA, pp. 2997–3002 (2006) 3. Hester, T., Quinlan, M., Stone, P.: RTMBA: A real-time model-based reinforcement learning architecture for robot control. In: ICRA, pp. 85–90 (2012) 4. Thrun, S.: An approach to learning mobile robot navigation. RAS 15(4), 301–319 (1995) 5. Matarić, M.J.: Reinforcement learning in the multi-robot domain. In: Arkin, R.C., Bekey, G.A. (eds.) Robot Colonies, pp. 73–83. Springer, New York (1997). doi:10.​1007/​978-1-4757-6451-2\_​4 CrossRef 6. Duan, Y., Liu, Q., Xu, X.: Application of reinforcement learning in robot soccer. Eng. Appl. Artif. Intell. 20(7), 936–950 (2007)CrossRef 7. Konidaris, G., Kuindersma, S., Grupen, R., Barto, A.: Robot learning from demonstration by constructing skill trees. IJRR 0278364911428653 (2011) 8. Ko, J., Klein, D.J., Fox, D., Haehnel, D.: GP-UKF: Unscented Kalman filters with Gaussian process prediction and observation models. In: IROS, pp. 1901–1907 (2007) 9. Bagnell, J.A., Hneider, J.G.S.: Autonomous helicopter control using reinforcement learning policy search methods. ICRA 2, 1615–1620 (2001) 10. Venkatraman, A., Hebert, M., Bagnell, J.A.: Improving multi-step prediction of learned time series models. In: AAAI, pp. 3024–3030 (2015) 11. Van Overschee, P., De Moor, B.: N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica 30(1), 75–93 (1994)MathSciNetCrossRefMATH 12. Ghahramani, Z., Roweis, S.T.: Learning nonlinear dynamical systems using an EM algorithm. In: NIPS, pp. 431–437 (1999) 13. Siddiqi, S.M., Boots, B., Gordon, G.J.: A constraint generation approach to learning stable linear dynamical systems. In: NIPS (2007) 14. Van Overschee, P., De Moor, B.: Subspace identification for linear systems: theory implementation applications. Springer Science & Business Media, New York (2012)MATH 15. Venkatraman, A., Boots, B., Hebert, M., Bagnell, J.A.: Data as demonstrator with applications to system identification. In: ALR Workshop, NIPS (2014) 16. Abbeel, P., Ng, A.Y.: Exploration and apprenticeship learning in reinforcement learning. In: ICML, pp. 1–8. ACM (2005) 17. Deisenroth, M., Rasmussen, C.E.: Pilco: a model-based and data-efficient approach to policy search. In: ICML, pp. 465–472 (2011) 18. Ross, S., Bagnell, D.: Agnostic system identification for model-based reinforcement learning. In: ICML, pp. 1703–1710 (2012) 19. Heess, N., Wayne, G., Silver, D., Lillicrap, T., Erez, T., Tassa, Y.: Learning continuous control policies by stochastic value gradients. In: NIPS, pp. 2926–2934 (2015) 20. Abbeel, P., Ganapathi, V., Ng, A.Y.: Learning vehicular dynamics, with application to modeling helicopters. In: NIPS, pp. 1–8 (2005) 21. Müller, K.-R., Smola, A.J., Rätsch, G., Schölkopf, B., Kohlmorgen, J., Vapnik, V.: Predicting time series with support vector machines. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, J.-D. (eds.) ICANN 1997. LNCS, vol. 1327, pp. 999–1004. Springer, Heidelberg (1997). doi:10.​1007/​BFb0020283 CrossRef 22. Li, W., Todorov, E.: Iterative linear quadratic regulator design for nonlinear biological movement systems. In: ICINCO, vol. 1, pp. 222–229 (2004) 23. Rahimi, A., Recht, B.: Random features for large-scale kernel machines. In: NIPS (2007) Footnotes 1 Trajectories can be sub-sampled shorter than the control problem’s time horizon.   2 Simulators, except the helicopter, available at https://​github.​com/​webrot9/​control\_​simulators with C++ and Python APIs.   Mobile Robots 2 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_62 Data Correlation and Comparison from Multiple Sensors Over a Coral Reef with a Team of Heterogeneous Aquatic Robots Alberto Quattrini Li¹  , Ioannis Rekleitis¹  , Sandeep Manjanna²  , Nikhil Kakodkar²  , Johanna Hansen²  , Gregory Dudek²  , Leonardo Bobadilla³  , Jacob Anderson⁴   and Ryan N. Smith⁴   (1) University of South Carolina, Columbia, South Carolina, USA (2) McGill University, Montreal, Québec, Canada (3) Florida International University, Miami, Florida, USA (4) Fort Lewis College, Durango, Colorado, USA     Alberto Quattrini Li Email: albertoq@cse.sc.edu   Ioannis Rekleitis (Corresponding author) Email: yiannisr@cse.sc.edu   Sandeep Manjanna Email: msandeep@cim.mcgill.ca   Nikhil Kakodkar Email: nikhil@cim.mcgill.ca   Johanna Hansen Email: jhansen@cim.mcgill.ca   Gregory Dudek Email: dudek@cim.mcgill.ca   Leonardo Bobadilla Email: bobadilla@cs.fiu.edu   Jacob Anderson Email: jaanderson@fortlewis.edu   Ryan N. Smith Email: rnsmith@fortlewis.edu Abstract This paper presents experimental insights from the deployment of an ensemble of heterogeneous autonomous sensor systems over a shallow coral reef. Visual, inertial, GPS, and ultrasonic data collected are compared and correlated to produce a comprehensive view of the health of the coral reef. Coverage strategies are discussed with a focus on the use of informed decisions to maximize the information collected during a fixed period of time. 1 Introduction In this paper we consider the use of an ensemble of disparate robotic subsystems for data collection on a tropical coral reef. By using multiple different mobile sensing systems, we can achieve advantages in terms of data coverage and diversity over what can be achieved by single or multiple homogeneous systems, due to the ability of different systems (with different intrinsic costs) to fill gaps in the overall information profile of the ecosystem being examined. Coral reefs are a critical part of the marine ecosystem and support a rich diversity of life with consequent economic value and social amenity. However, sea surface temperatures have increased over the past few decades, resulting in widespread coral bleaching at an ever-increasing rate. Projected increases in global temperatures of 2–4.5 [$$^{\circ }$$]C by 2100 [1] indicate that mass coral bleaching events are likely to become an annual phenomenon by 2050. The widespread mortality of corals following mass bleaching events reduces the structural complexity of reefs – eliminating the three dimensional habitat, which is critical in maintaining diversity and sustainable populations of coral-reef fish communities; see Fig. 1. The first step in mitigating this degradation is to be able to observe and measure it. [] Fig. 1. Representative photographs of the decline in coral reefs due to coral bleaching. Despite the importance of coral reef ecosystems, spatial and temporal dynamics of coral bleaching events are poorly understood. Most surveys of coral bleaching have been conducted using either human divers or remote sensing (satellite imagery), but these approaches are fundamentally limited in scale or definition. Satellites are able to cover large spatial scales, tens of square meters to square kilometers, and approximate percentage of coral cover, however, they are unable to resolve fine-scale ecological changes (centimeters to meters) of individual corals. In situ measurement by divers can provide such data, but at limited spatial scales (<1 km[$$^2$$]) and at high cost. Automating the task of surveying an area will be extremely valuable; for example, collecting data over time and thus monitoring the health status of individual corals and the reef in general. Earlier work on multi-robot monitoring has focused on targeted data acquisition with the guidance of a marine biologist, and not on extended coverage [2] or data fusion between heterogeneous assets. Aquatic robots provide a novel solution to the fundamental problem of large-scale precise coverage, and represent a viable approach to quantifying the extent of fine-scale coral bleaching and reef structural complexity, for long periods of time (days to weeks), and most importantly, at comparatively low cost. There are four critical research problems in better understanding coral reef health: (1) monitoring of individual coral with respect to bleaching and structural complexity; (2) quantifying changes in coral bleaching and structural complexity over time; (3) using these measures to assess reef and reef ecosystem health, and (4) to predict reef health under various scenarios. This paper describes ongoing research to help addressing the first two problems, while eventually gathering the data necessary to help address the third and fourth. More specifically, in this paper we use a combination of several classes of robotic assets to jointly survey a coral reef in the Caribbean Sea. Each class of vehicle provides complementary operational characteristics, sensing modalities, and time-scale of operations. [] Fig. 2. The different vessels used in the proposed approach (a) Drifting sensor node; (b) Kingfisher Autonomous Surface Vehicle; (c) EcoMapper Autonomous Underwater Vehicle. 2 Technical Approach Given the large spatial extents of coral reefs, it is impossible to sample everywhere at very high resolution with short-term deployment (hours) robotic assets. Moreover, many monitoring efforts have serious limitations not only in personnel, but in the resources necessary to operate a monitoring effort. We seek to examine the utility of persistent vehicles, which may provide a sparse sampling of the environment, to inform sampling missions to be executed by shorter-duration vehicles with high-resolution sensor packages. In this paper, we examine the utility of a heterogeneous fleet of aquatic robots to cooperatively assess large-scale aquatic ecosystems, with a specific focus on representing the responses of coral reef ecosystems to climate change. We approach this problem through a unique blend of persistent aquatic sampling, and vision-based sensing. The flotilla consists of six, low-cost, passive floating sensors (drifters) shown in Fig. 2(a), an Autonomous Surface Vehicle (ASV) shown in Fig. 2(b), and an Autonomous Underwater Vehicle (AUV) shown in Fig. 2(c). The drifters employ a monocular camera to gather image data over the reef. The ASV is equipped with a downward-looking, stereo camera, and has the ability to navigate to any chosen location. The AUV collects a range of water quality data (e.g., temperature, pH, salinity, dissolved oxygen, etc.) and gathers side-scan sonar data. This paper focuses on the image data products from each of the vehicles and compares them against to each other to understand the viability of such a setup for informed path planning. The extension to include water quality data fused with image and sonar data to underpin intelligent sampling is part of future work. Visual Mosaic from Drifting Nodes A set of inexpensive drifters has been developed to collect visual, inertial, and GPS data over moderate periods of deployment. They operate together as a passive sensor network and can be deployed manually for interaction with each other or with other vehicles. Similar drifters have been used in several research projects in marine data collection and ocean observation [3–7]. The drifters used here are different from most used previously in that they collect image data using downward-looking cameras. Currently their operation has been limited during deployments to eight hours based on the installed battery and resource allocation to track them over longer periods. The power consumption is minimal, and can easily be extended to multiple day deployments by installing a larger battery. These drifters are based on the Raspberry PI 2 computing unit and camera, and they are equipped with an inertial measurement unit (IMU) and GPS sensor. One of the main objectives of these drifters was to model Lagrangian water dynamics, by tracking the motion of a body of water [8] over long distances. The ability to collect visual data enabled the use of these drifters in a coral reef monitoring function [9]. The motion of the drifting sensor nodes at the surface includes a significant amount of bobbing from local wave action, as they are not stabilized in pitch or roll; see Fig. 2(a). Consequently each drifter rotates off the vertical by [$$30^{\circ }$$]–[$$45^\circ $$], thus expanding the regular field of view (FoV) of the camera of [$$60^\circ $$] to more than [$$120^\circ $$]. In addition, the slow Lagrangian motion of the drifters results in multiple images of the same area taken from the slightly different positions, allowing for a more robust multi-view reconstruction from the monocular camera. A modular visual pipeline framework based on OpenCV has been implemented to create patches of visual mosaics. In addition, according to the specific datasets, it is possible to choose different feature detection algorithms, descriptors, and feature matching algorithms. Augmenting vision with inertial data enables the recovery of the drifter’s attitude and discarding of the images with sharp rotations that could result in loss of tracking. Being geo-referenced, the large amount of visual data collected is cross-referenced with the data collected from the other assets. Data Collection from Other Vehicles The ASV employed in this research is the Clearpath Kingfisher ASV [10]. This vehicle is equipped with GPS, IMU and a downward-looking, stereo GoPro camera. The AUV employed in this research is the YSI EcoMapper [11]. This vehicle is equipped with a water quality sonde, GPS, IMU, compass, side-scan sonar and upward- and downward-looking Doppler Velocity Loggers. Conversely to the operation of the drifters, these two vehicles control their own motion and navigate to specific locations rather than drift. For this research, both vehicles conducted a regular, lawnmower-type sampling strategy over the area of interest to ensure complete coverage and maximal data overlap, see Fig. 3. A primary focus of this research is to examine and fuse the information gathered to reduce the sampling burden on the short-deployment vehicles (ASV and AUV) while providing a persistent sampling presence and maximizing coverage of important regions within a reef environment. Coverage Strategies An important feature of an anytime algorithm, and in particular the coverage algorithm used by our ASV is to visit the areas of interest in decreasing order of value. This becomes significant when the task is to collect data in limited time and efficient complete coverage [12] is not possible, especially with uncertain deadlines as may occur in the presence of currents (which impact power consumption and efficiency) and variable weather. After the long-term assets (drifters) have collected enough information, the short-term assets, i.e., ASV and AUV, travel to select locations to collect additional information. A selective coverage algorithm with the behavior of choosing the salient regions to examine first is tested using the ASV [13]. This algorithm is value-iteration-based, which covers the entire region of interest, but in a prioritized fashion. The selective coverage algorithm assumes an underlying distribution for the phenomenon that needs to be modeled and builds an off-line trajectory to cover the high probability regions first, while simultaneously reducing the travel time and energy consumption. In the case of coral monitoring, the actual depth map is utilized as an input distribution to cover the shallow regions first, thus making sure the coral-populated areas are fully covered before the time constraint is reached. One fundamental assumption made here is that, in a given area, the corals exist at shallower depths than the sand; an assumption based on observations during field trials of the specific deployment location, and input from experts in marine robotics. Based on this assumption, covering the shallower regions would provide a good coverage of the corals in the selected region. Data Comparison and Fusion Comparison of the collected heterogeneous data, potentially taken at different times, requires alignment and/or registration between the datasets. This has proven to be a difficult problem for large underwater datasets [14] as growth of algae and coral significantly change their appearance over time, as well as the presence of lighting differences and different data modalities. After testing multiple different image registration techniques, including feature-based matching and Zernike moments for image recognition [15], FAB-MAP [16] was selected to recognize places based on their appearance. In particular, in the domain considered in the paper, preliminary experiments showed that the combination of STAR-SURF as a feature detector and descriptor extractor had better results in terms of number of detected features; see also the work of Quattrini Li et al. [17]. Images randomly selected from the drifters and the ASV were used for creating the training dataset to build the visual vocabulary. [] Fig. 3. The GPS traces of the three different kinds of vessels: Drifters red, ASV yellow, AUV orange. 3 Experiments An area in the Folkestone Underwater Park and Marine Reserve, Barbados, was selected for deploying three heterogeneous, data-collecting assets; six drifters were randomly placed over a shallow area over multiple deployments, a Kingfisher ASV and EcoMapper AUV executed the coverage strategies described in the previous section. Figure 3 shows the region covered by each asset, and the overlapping regions covered by the three vessels. The total area is approximately 100 m [$$\times $$] 100 m, and experiments were carried out during different days over a period of one week. Over the course of the experiments one of the drifters went missing, and was never recovered. 4 Results Images with resolution of [$$640 \times 480$$] were captured by the camera mounted on each of the drifters. The cameras operated at 10 FPS with a field-of-view of [$$\sim $$]60[$$^{\circ }$$]. The Kingfisher ASV collected stereo and monocular images using GoPro cameras mounted at the bottom of the vessel. Sonar data were collected with a Starfish Side-Scan sonar attached to a YSI EcoMapper AUV. The sonar operates at a frequency of 450 kHz, and has a range of 1 m–100 m, with a vertical beam width of [$$60^{\circ }$$]. Given an average height of 5 m over the survey area, the sonar swaths were approximately 8.7 m. A sample of the different data resulting from the three vessels is presented in Fig. 4. [] Fig. 4. Data collected from the different assets (a) A mosaic patch from images of the drifter showing a coral; (b) Kingfisher ASV; (c) EcoMapper SONAR. [] Fig. 5. Mosaic from a section of the Kingfisher’s trajectory. [] Fig. 6. The image matches between the images from kingfisher and the drifter that were reported by FAB-MAP. [] Fig. 7. Depth maps generated by interpolating the altitude measurements collected by two vehicles: (a) from Kingfisher robot (b) EcoMapper. The color-bars indicate the depth in meters. The drifters, even being deployed over multiple hours and days, covered a comparatively smaller area than the other vessels. This is due to the fact that they move passively, being carried by ocean currents. During the particular deployment reported here, strong surge caused the drifters to spend a significant amount of time trapped near the shore. [] Fig. 8. Selected images from the coverage. The dotted line on the image shows the coverage trajectory of the Kingfisher robot. The color-bar indicates the time in seconds. Triangle and the circle indicate the start and end points of the path respectively. Examining and comparing the data products of the drifters and the ASV, it is possible to observe the lower camera quality of the drifter. The higher resolution of the images from the ASV and its stability in terms of rotation compared to the drifters provide a better stitched mosaic; see Fig. 5. FAB-MAP was able to match some of the images from the two vessels covering the same area. Figure 6 shows a sample of images from the ASV (left) and drifters (right) that matched with the probability of association reported on the arrows. Corals have a very distinctive appearance, and the images from the two different vessels are able to match over most regions containing corals in the images. However, false positives can occur because of similar structure appearing underwater. Note that the distance of the GPS associated to the related images is 5 m for the correct matched images, while about 30 m for the mismatched images. Relating the data coming from the sonar and the cameras however did not provide the expected results, as a variety of techniques for aligning the images were proven unsuccessful. In Fig. 7, depth maps generated by the by interpolating the point measurements of the depth collected by ASV and AUV using Gaussian Processes are shown. The point measurements provided by the sonar pingers mounted on the two vessels gave a good estimate of depth over the area, producing similar maps. Figure 8 shows the coverage trajectory followed by the robot, using the value-iteration-based method described above. The robot strategy covers the region such that the information rich areas are covered first. Thus maximizing information gain in minimum time. 5 Discussion and Experimental Insights From the experiments carried out in the Caribbean Sea, the following experimental insights emerged. - The drifters have a very low impact in the environment allowing them to record undisturbed marine life at a very close distance; see Fig. 9 where a short-fin Atlantic squid is recorded floating under the drifter at close range. - Given the limited range and limited controllability of the drifters, enhancing the capabilities of these assets will be a first priority; particularly their ability to move vertically in the water column. From the preliminary data collected, they show great promise by providing the right type of data and persistence in the environment. - Images collected by low-resolution and high-resolution cameras can be used to recognize the same places. This corroborates the idea of using a low-cost persistent vessel to inform the other vehicles about areas of interest where to sample. In addition, the images synchronized with GPS information can be used to localize AUVs underwater, where GPS information is denied. However, more work need to be done to post-process the images so that they can be recognized. A reliable place recognition module is necessary because GPS information can have significant error obstructing the high-resolution mapping of a specified area. - The data that can be extracted from cameras and from the sonar, although they might look similar, are of a different kind and the difference in terms of coverage, resolution, and feature recognition does not allow currently for reliable alignment. Future work will focus on correlating features between these two sensing modalities. - We used a selective coverage technique which selectively chooses the locations with high information to visit based on an underlying prior. This approach reduced the distance traveled to collect data by 48% compared to traditional methods such as repeated linear transects [13]. [] Fig. 9. A longfin squid in an image captured by a drifter. 6 Conclusions and Future Works We have shown the use of three different types of vessels, with a specific focus on their data products. This analysis introduces new research areas, including coordinated planning of the vessels and handling communication constraints, so that large-scale coral reef assessment can be enabled. Given long-term assets, ongoing work will build on the concept of controlled drift developed by Smith et al. [7, 18]. Specifically, a general desired trajectory can be achieved by using known and/or predicted ocean currents. Directly related to the areas where coral reefs exist, these platforms can operate in dynamic, coastal environments, as well as confined regions, e.g., embayments, and navigate between pre-designated waypoints. Given the capabilities of the long term assets and the possibility to align data from the different vessels, informed viewpoint planning could be used for a complete 3D reconstructions of coral reefs. The concept of finding an optimal viewpoint based on maximal information gain has been widely studied in the perception and manipulation communities [19–21]. In these communities, the problem is referred to as the Next-Best View (NBV) problem. The primary challenge of mission adaptation using online viewpoint planning is that the optimization constraints, specifically an overall time constraint, pose an NP hard problem (similar to the traveling salesman problem). Starting from the work of McKinnon et al. [22, 23], in which highly accurate 3D reconstructions of branching coral with complex geometry were computed, future work extends the NBV problem by computing the views that minimize camera reprojection error over the entire scene. Relaxation of the optimization constraints combined with an innovative planning strategy based on Gaussian Processes will compute a feasible path for an AUV. Some other research directions that are worth future investigations are: - A central question is to qualify the gains from utilizing different assets for different data collection modalities. Specifically, over a shallow reef, does the persistence of the drifters outweigh the speed of the EcoMapper? - Supposing that all assets are collecting data simultaneously, we are interested in what communication capabilities between the devices would be useful. - Based on experience, the drifters tend to drift away, understandably. We would like to investigate the problem of drifter collection via a surface vehicle. What kind of control/communication is necessary on-board the drifters in order to ensure that no drifter is lost? Acknowledgment The authors would like to thank the generous support of the Google Faculty Research Award and the National Science Foundation grants (NSF 0953503, 1513203, 1526862, 1637876). References 1. IPCC: Climate Change 2007: The physical science basis. In: Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K., Tignor, M., Miller, H. (eds.) Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press (2007) 2. Shkurti, F., Xu, A., Meghjani, M., Higuera, J.C.G., Girdhar, Y., Giguere, P., Dey, B.B., Li, J., Kalmbach, A., Prahacs, C., Turgeon, K., Rekleitis, I., Dudek, G.: Multi-domain monitoring of marine environments using a heterogeneous robot team. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Portugal, pp. 1447–1753 (2012) 3. Das, J., Py, F., Maughan, T., Oreilly, T., Messie, M., Ryan, J., Rajan, K., Sukhatme, G.S.: Simultaneous tracking and sampling of dynamic oceanographic features with autonomous underwater vehicles and lagrangian drifters. In: Khatib, O., Kumar, V., Sukhatme, G. (eds.) Experimental Robotics. Springer Tracts in Advanced Robotics, pp. 541–555. Springer, Heidelberg (2014). doi:10.​1007/​978-3-642-28572-1\_​37 CrossRef 4. Meghjani, M., Shkurti, F., Higuera, J.C.G., Kalmbach, A., Whitney, D., Dudek, G.: Asymmetric rendezvous search at sea. In: Canadian Conference on Computer and Robot Vision (CRV), pp. 175–180. IEEE (2014) 5. Jouffroy, J., Zhou, Q., Zielinski, O.: Towards selective tidal-stream transport for Lagrangian profilers. In: MTS/IEEE Oceans, Kona, HI (2011) 6. Jouffroy, J., Zhou, Q., Zielinski, O.: On active current selection for Lagrangian drifters. Robotics and Autonomous Systems (2012) 7. Smith, R.N., Huynh, V.T.: Controlling buoyancy-driven profiling floats for applications in ocean observation. IEEE J. Oceanic Eng. 39(3), 571–586 (2014)CrossRef 8. Boydstun, D., Farich III, M., J.M., Rubinson, S., Smith, Z., Rekleitis, I.: Drifter sensor network for environmental monitoring. In: 12th Conference on Computer Robot Vision, Halifax, Nova Scotia, Canada, pp. 16–22, June 2015 9. Xanthidis, M., Quattrini Li, A., Rekleitis, I.: Shallow coral reef surveying by inexpensive drifters. In: MTS/IEEE Oceans Shanghai, pp. 1–9, April 2016 10. Albrecht, R.: Kingfisher ASV (2015). http://​www.​clearpathrobotic​s.​com/​kingfisher/​ 11. YSI Incorporated: YSI EcoMapper (2015). https://​www.​ysi.​com/​ecomapper 12. Xu, A., Viriyasuthee, C., Rekleitis, I.: Efficient complete coverage of a known arbitrary environment with applications to aerial operations. Auton. Robots 36(4), 365–381 (2014)CrossRef 13. Manjanna, S., Nikhil Kakodkar, M.M., Dudek, G.: Efficient terrain driven coral coverage using Gaussian processes for mosaic synthesis. In: Computer Robot Vision (CRV) (2016) 14. Johnson-Roberson, M., Pizarro, O., Williams, S., Mahon, I.: Generation and visualization of large-scale three-dimensional reconstructions from underwater robotic surveys. J. Field Robot. 27(1), 21–51 (2010)CrossRef 15. Khotanzad, A., Hong, Y.H.: Invariant image recognition by Zernike moments. IEEE Trans. Pattern Anal. Mach. Intell. 12(5), 489–497 (1990)CrossRef 16. Cummins, M., Newman, P.: Appearance-only SLAM at large scale with FAB-MAP 2.0. Int. J. Robot. Res. 30(9), 1100–1123 (2011). doi:10.​1177/​0278364910385483​ CrossRef 17. Quattrini Li, A., Coskun, A., Doherty, S.M., Ghasemlou, S., Jagtap, A.S., Modasshir, M., Rahman, S., Singh, A., Xanthidis, M., O’Kane, J.M., Rekleitis, I.: Vision-based shipwreck mapping: on evaluating features quality and open source state estimation packages. In: MTS/IEEE OCEANS Monterey, September 2016, accepted 18. Smith, R.N., Dunbabin, M.: Controlled drift: an investigation into the controllability of underwater vehicles with minimal actuation. In: Australasian Conference on Robotics and Automation, December 2011 19. Connolly, C.: The determination of next best views. In: IEEE International Conference on Robotics and Automation, pp. 432–435 (1985) 20. Elfes, A.: Using occupancy grids for mobile robot perception and navigation. Computer 22(6), 46–57 (1989)CrossRef 21. Torabi, L., Gupta, K.: An autonomous six-DOF eye-in-hand system for in situ 3D object modeling. Int. J. Robot. Res. 31(1), 82–100 (2012)CrossRef 22. McKinnon, D., Upcroft, B., Smith, R.N.: Towards automated and in-situ, near-real time 3-D reconstruction of coral reef environments. In: MTS/IEEE Oceans, Kona, Hawaii, September 2011 23. McKinnon, D., Smith, R.N., Upcroft, B.: A semi-local method for iterative depth-map refinement. In: IEEE International Conference on Robotics and Automation (ICRA), May 2012 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_63 Multi Robot Object-Based SLAM Siddharth Choudhary¹  , Luca Carlone², Carlos Nieto¹, John Rogers³, Zhen Liu¹, Henrik I. Christensen¹ and Frank Dellaert¹ (1) Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, USA (2) Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, USA (3) Army Research Lab, Adelphi, USA     Siddharth Choudhary Email: siddharth.choudhary@gatech.edu Abstract We propose a multi robot SLAM approach that uses 3D objects as landmarks for localization and mapping. The approach is fully distributed in that the robots only communicate during rendezvous and there is no centralized server gathering the data. Moreover, it leverages local computation at each robot (e.g., object detection and object pose estimation) to reduce the communication burden. We show that object-based representations reduce the memory requirements and information exchange among robots, compared to point-cloud-based representations; this enables operation in severely bandwidth-constrained scenarios. We test the approach in simulations and field tests, demonstrating its advantages over related techniques: our approach is as accurate as a centralized method, scales well to large teams, and is resistant to noise. 1 Introduction The deployment of multiple cooperative robots is an important asset for fast information gathering over large areas. In particular, multi robot SLAM, i.e., the cooperative construction of a model of the environment explored by the robots, is fundamental to geo-tag sensor data (e.g., for pollution monitoring, surveillance and search and rescue), and to gather situational awareness. In this paper we are interested in the case in which the robots operate under severe bandwidth constraints. In this context, the robots have to reconstruct a globally-consistent map by communicating a small amount of information among the teammates. Dealing with bandwidth constraints is challenging for two reasons. First, most approaches for multi robot SLAM imply a communication burden that grows quadratically in the number of locations co-observed by the robots [1]; these approaches are doomed to quickly hit the bandwidth constraints. In our previous works [2, 3] we alleviated this issue by proposing an approach, based on the distributed Gauss-Seidel method, which requires linear communication. The second issue regards the communication cost of establishing loop closures among robots. When the robots are not able to directly detect each other, loop closures have to be found by comparing raw sensor data; in our setup the robots are equipped with an RGBD camera and exchanging multiple 3D point clouds quickly becomes impractical in presence of communication bounds. In this paper we address this second issue by using an object-level map representation. Related Work. Traditional approaches for multi robot mapping use low-level primitives like points and lines to model the geometry of the environment [4]; these maps become memory-intensive in long-term operation, contain very redundant information (e.g., use thousands of points to represent a planar surface), and lack semantic understanding, which is a key element in a wide range of tasks (e.g., human robot interaction or manipulation). For these reasons, semantic mapping has attracted a conspicuous interest from the community, starting from early papers [5], to more recent works which use object templates [6], door signs [7], or planes [8] for mapping. A recent survey can be found in [9]. Distributed estimation in multi robot systems is currently an active field of research, with special attention being paid to communication constraints [10], heterogeneous teams [11] and robust data association [12]. The robotic literature offers distributed implementations of different estimation techniques, including Extended Kalman filters [13], information filters [14], and Gaussian elimination [1]. Contribution. In this work we advocate the use of higher-level map representations as a tool to enable operation in bandwidth-constrained multi robot scenarios. Maps augmented with objects provide a number of advantages: objects (including planes and other geometric shapes) can be represented in a compact manner and provide a richer and human-understandable description of the environment. Objects are more discriminative, which helps data association and loop-closure detection. Finally, object representations reduce the computational complexity of SLAM by reducing the number of variables (intuitively, we estimate the pose of few objects rather than the position of several 3D points). We propose an approach for Multi Robot Object-based SLAM with two distinctive features. The first is the front-end, which performs accurate object detection using deep learning. Deep learning provides an effective tool to generalize early work on object-based mapping [6] to a large number of object categories. The second is the back-end, which implements distributed pose graph optimization using the distributed Gauss-Seidel method, described in our previous work [3]. We show that the combination of these two techniques reduces the memory requirement and information exchange among robots, allows accurate and parsimonious large-scale mapping, and scales to large teams. 2 Technical Approach Problem Statement. We consider a multi robot system and we denote each robot with a Greek letter, such that the set of robots is [$$ {\varvec{\varOmega }} =\{\alpha , \beta , \gamma , \ldots \}$$]. The goal of each robot is to estimate its own trajectory and the pose of a set of objects in the environment, using local measurements, and leveraging occasional communication with other robots. We model each trajectory as a finite set of poses; the pose assumed by robot [$$\alpha $$] at time i is denoted with [$$\varvec{x}\_{\alpha \_{i}}\in \mathrm {SE}(3) $$]; when convenient, we write [$$\varvec{x}\_{\alpha \_{i}} = ({\varvec{R}}\_{\alpha \_{i}}, \varvec{t}\_{\alpha \_{i}})$$], making explicit that each pose includes a rotation [$${\varvec{R}}\_{\alpha \_{i}} \in \mathrm {SO}(3) $$], and a position [$$\varvec{t}\_{\alpha \_{i}} \in { {\mathbb R}^{3} } $$]. The trajectory of robot [$$\alpha $$] is [$$\varvec{x}\_{\alpha \_{}} = [\varvec{x}\_{\alpha \_{1}},\varvec{x}\_{\alpha \_{2}}, \ldots ]$$]. Similarly, we denote with [$$\varvec{o}\_{\alpha \_{k}}\in \mathrm {SE}(3) $$] the pose of the [$$k^{th}$$] object in the coordinate frame of robot [$$\alpha $$] (Fig. 2). [] Fig. 1. Flowchart of Object based SLAM Object detection and pose estimation. Each robot collects RGBD data using a depth camera, and measures its ego-motion through wheel odometry. In our approach, each RGB frame (from RGBD) is passed to the YOLO object detector [15], which detects objects at 45 frames per second. Compared to object-proposal-based detectors, YOLO is fast, since it avoids the computation burden of extracting object proposals, and is less likely to produce false positives in the background. We fine-tune the YOLO detector on a subset of objects from the BigBird dataset [16]. The training dataset contains the object images in a clean background taken from different viewpoints and labeled images of the same objects taken by a robot in an indoor environment. During testing, we use a probability threshold of 0.3 to avoid false detections. Each detected object bounding box is segmented using the organized point cloud segmentation [17]. The segmented object is matched to the 3D template of the detected object class to estimate its pose. We extract PFHRGB features [18] in the source (object segment) and target (object model) point clouds and register the two point clouds in a Sample Consensus Initial Alignment framework [19]. If we have at least 12 inlier correspondences, GICP (generalized iterative closest point [20]) is performed to further refine the registration and the final transformation is used as the object pose estimate. If less than 12 inlier correspondences are found, the detection is considered to be a false positive. This two-stage process, verifies the detection both semantically and geometrically. Object-based SLAM. If object pose estimation is successful, it is data-associated with other instances already present in the map by finding the object landmark having the same category label within [$$2\sigma $$] distance of the newly detected object. If there are multiple objects with the same label within that distance, the newly detected object is matched to the nearest object instance. If there exists no object having the same label, a new object landmark is created. Before the first rendezvous event, each robot performs standard single-robot SLAM using OmniMapper [8]. Both wheel odometry and relative pose measurements to the observed objects are fed to the SLAM back-end, which is based on pose graph optimization [3]. In particular, if object k at pose [$$\varvec{o}\_{\alpha \_{k}}$$] is seen by the robot [$$\alpha $$] from the pose [$$\varvec{x}\_{\alpha \_{i}}$$], then an object-pose factor is added to the graph: [$$ f\_{op}\left( \varvec{x}\_{\alpha \_{i}}, \varvec{o}\_{\alpha \_{k}}, \varvec{z}\_{ik}\right) \propto \exp \left( -\frac{1}{2}\Vert \text {Log}\left( \varvec{z}\_{ik}^{-1}\varvec{x}\_{\alpha \_{i}}^{-1}\varvec{o}\_{\alpha \_{k}}\right) \Vert \_{\varSigma }^{2}\right) $$] where [$$\varvec{z}\_{ik}$$] is the relative pose estimate returned by the object-pose estimator described above, [$$\varSigma \in { {\mathbb R}^{6 \times 6} } $$] is the corresponding covariance, and [$$\text {Log}$$] is the logarithm map for [$$\mathrm {SE}(3)$$]. A flowchart of the SLAM approach is given in Fig. 1. Robot Communication. During a rendezvous between robots [$$\alpha $$] and [$$\beta $$], robot [$$\alpha $$] communicates the category labels (class) and poses (in robot [$$\alpha $$]’s frame) of all the detected objects to robot [$$\beta $$]. We assume that the initial pose of each robot is known to all the robots, hence, given the initial pose of robot [$$\alpha $$], robot [$$\beta $$] is able to transform the communicated object poses from robot [$$\alpha $$]’s frame to its own frame.¹ For each object in the list communicated by robot [$$\alpha $$], robot [$$\beta $$] finds the nearest object in its map, having the same category label and within [$$2\sigma $$] distance. If such an object exists, it is added to the list of shared objects: this is the set of objects seen by both robots. [] Fig. 2. Factor graph representation of Multi-Robot Object based SLAM. [$$\varvec{x}\_{\alpha \_{i}}$$] and [$$\varvec{x}\_{\beta \_{i}}$$] denote the poses assumed by robot [$$\alpha $$] and [$$\beta $$] at time i respectively. The pose of the [$$k^{th}$$] object in the coordinate frame of robot [$$\alpha $$] and [$$\beta $$] is denoted with [$$\varvec{o}\_{\alpha \_{k}}$$] and [$$\varvec{o}\_{\beta \_{k}}$$] respectively. Green dots shows inter-robot factors whereas orange and purple dots shows intra-robot factors. The list of shared objects contains pairs [$$(\varvec{o}\_{\alpha \_{k}},\varvec{o}\_{\beta \_{l}})$$] and informs the robots that the poses [$$\varvec{o}\_{\alpha \_{k}}$$] and [$$\varvec{o}\_{\beta \_{l}}$$] correspond to the same physical object, observed by the two robots. For this reason, in the optimization we enforce that the relative pose between [$$\varvec{o}\_{\alpha \_{k}}$$] and [$$\varvec{o}\_{\beta \_{l}}$$] is zero. We do that by adding an object-object factor to the pose graph for each pair [$$(\varvec{o}\_{\alpha \_{k}},\varvec{o}\_{\beta \_{l}})$$] in the shared object list: [$$ f\_{oo}\left( \varvec{o}\_{\beta \_{l}}, \varvec{o}\_{\alpha \_{k}}\right) \propto \exp \left( -\frac{1}{2}\Vert \text {Log}\left( \varvec{o}\_{\beta \_{l}}^{-1} \varvec{o}\_{\alpha \_{k}}\right) \Vert \_{\varLambda }^{2}\right) $$] where [$$\varLambda \in { {\mathbb R}^{6 \times 6} } $$] specifies the confidence in the data association among the shared set of objects. We remark that, while before the first rendezvous the robots [$$\alpha $$] and [$$\beta $$] have different reference frames, the object-object factors enforce both robots to have a single shared reference frame, facilitating future data association. Distributed Optimization. Overall, our approach uses two types of measurements: intra-robot and inter-robot measurements. The intra-robot measurements consists of the odometry measurements which constrain consecutive robot poses (e.g., [$$\varvec{x}\_{\alpha \_{i}}$$] and [$$\varvec{x}\_{\alpha \_{i+1}}$$]) and pose-object measurements which constrains robot poses with the corresponding visible object landmarks (e.g., [$$\varvec{x}\_{\alpha \_{i}}$$] and [$$\varvec{o}\_{\alpha \_{k}}$$]). The inter-robot measurements are the ones relating the objects observed by different robots. According to our previous terminology, an inter-robot measurement is associated to each pair in the shared object list. Figure 2 shows the pose graph containing intra and inter-robot measurements. Given these measurements, we use the distributed Gauss-Seidel (DGS) algorithm [3] to estimate the 3D trajectories of the robots along with the poses of the observed objects. 3 Results We evaluate our approach in large simulations (Sect. 3.1) and field tests (Sect. 3.2). The results demonstrate that the proposed approach is accurate, scalable, robust to noise, and requires less memory and communication bandwidth. We evaluate the accuracy of our approach by comparing it against the standard centralized Gauss-Newton method [21]. In particular, we use three accuracy metrics: (i) the cost at the final estimate, (ii) the average translation error (ATE\*) and (iii) average rotation error (ARE\*) of the robot and landmark poses. Average Translation error (ATE\*). Similar to the formulation by Sturm et al. [22], the average translation error measures the absolute distance between the trajectory and object poses estimated by our approach versus the centralized Gauss-Newton (GN) method. The trajectory ATE\* is defined as follows: [$$\begin{aligned} ATE^\*= \left( \frac{1}{\sum \_{\alpha \in \varOmega } n\_\alpha } \sum \_{\alpha \in \varOmega } \sum \_{i=1}^{n\_\alpha }\Vert \varvec{t}\_{\alpha \_{i}} - \varvec{t}\_{\alpha \_{i}}^\* \Vert ^{2} \right) ^{\frac{1}{2}} \end{aligned}$$] (1) where [$$\varvec{t}\_{\alpha \_{i}}$$] is the position estimate for robot [$$\alpha $$] at time i, [$$\varvec{t}\_{\alpha \_{i}}^\*$$] is the corresponding estimate from GN, and [$$n\_\alpha $$] is the number of poses in the trajectory of [$$\alpha $$]. A similar definition holds for the object positions. Average Rotation error (ARE\*). The average rotation error is computed by evaluating the angular mismatch between the (trajectory and objects) rotations produced by the proposed approach versus a centralized GN method: [$$\begin{aligned} ARE^\*= \left( \frac{1}{\sum \_{\alpha \in \varOmega } n\_\alpha } \sum \_{\alpha \in \varOmega } \sum \_{i=1}^{n\_\alpha } \Vert \text {Log} \left( ({\varvec{R}}\_{\alpha \_{i}}^\*)^{\mathsf {T}}{\varvec{R}}\_{\alpha \_{i}} \right) \Vert ^{2} \right) ^{\frac{1}{2}} \end{aligned}$$] (2) where [$${\varvec{R}}\_{\alpha \_{i}}$$] is the rotation estimate for robot [$$\alpha $$] at time i, [$${\varvec{R}}\_{\alpha \_{i}}^\*$$] is the corresponding estimate from GN. A similar definition holds for the object rotations. Our approach is based on the DGS method and is iterative in nature. Therefore, its accuracy depends on the number of iterations, which in turns depends on the choice of the stopping conditions, see [3] for details. In the following, we present results for two different choices of the stopping condition [$$\eta $$]. 3.1 Simulation Experiments In this section we characterize the performance of the proposed approach in terms of scalability in the number of robots and sensitivity to noise. We test the approach in two scenarios (a) 25 Chairs and (b) House, which we simulated in Gazebo. In the 25 Chairs scenario, we placed 25 chairs as objects on a grid, with each chair placed at a random angle. In the House scenario, we placed furniture as objects in order to simulate an indoor living room environment. Figure 3 shows the two scenarios. Unless specified otherwise, we generate measurement noise from a zero-mean Gaussian distribution with standard deviation [$$\sigma \_R = 5\,^\circ $$] for the rotations and [$$\sigma \_t = 0.2$$] m for the translations. Six robots are used by default. Results are averaged over 10 Monte Carlo runs. [] Fig. 3. Shows the screenshot of 25 Chair and House scenarios simulated in Gazebo. [] Fig. 4. Shows the trajectories of the six robots and object locations (shows as dots) estimated using centralized mapping and multi-robot mapping for 25 Chairs (top) and House scenario (bottom). Figure 4 show the comparison between the object locations and trajectories estimated using multi-robot mapping and centralized mapping for two scenarios. Videos showing the map building for the two scenarios are available at: https://​youtu.​be/​nXJamypPvVY and https://​youtu.​be/​nYm2sSHuGjo. Table 1. Number of iterations, cost, ATE\* and ARE\* of our approach as compared to centralized Gauss-Newton method for increasing number of robots. ATE\* and ARE\* are measured using [$$\eta \!=\!10^{-1}$$] as stopping condition. [TABLE] Accuracy in the Number of Robots. Table 1 reports the number of iterations and our accuracy metrics (cost, ATE\*, ARE\*) for increasing number of robots. The table confirms that the distributed approach is nearly as accurate as the centralized Gauss-Newton method and the number of iterations does not increase with increasing number of robots, making our approach scalable to large teams. Usually, few tens of iterations suffice to reach an accurate estimate. Table 2. Number of iterations, cost, ATE\* and ARE\* of our approach as compared to centralized Gauss-Newton approach for increasing measurement noise. ATE\* and ARE\* are measured using [$$\eta \!=\!10^{-1}$$] as stopping condition. [TABLE] Sensitivity to Measurement Noise. We further test the accuracy of our approach by evaluating the number of iterations, the cost, the ATE\* and the ARE\* for increasing levels of noise. Table 2 shows that our approach is able to replicate the accuracy of the centralized Gauss-Newton method, regardless of the noise level. [] Fig. 5. Objects from BigBird dataset used in Field Experiments [] Fig. 6. (Left) Clearpath Jackal robot used for the field tests: platform and sensor layout; (right) snapshot of the test facility and the Jackal robots. 3.2 Field Experiments We tested our approach on field data collected by two Jackal robots (Fig. 6) moving in a MOUT (Military Operations in Urban Terrain) facility. We scattered the environment with a set of objects from the BigBird dataset [16], shown in Fig. 5. Each robot is equipped with an Asus Xtion sensor and uses wheel odometry to measure its ego-motion. We evaluated our approach in two different scenarios, the stadium and the house. We did two runs inside the stadium (Stadium-1 & Stadium-2) and one run in the house with objects randomly spread along the robot trajectories. Stadium scenario datasets were collected in an indoor basketball stadium with the robot trajectories bounded in a roughly rectangular area. House scenario dataset was collected around the living room and kitchen area of a house. [] Fig. 7. Shows YOLO object detection snapshots in three difference scenes, (l to r) stadium, house, UW scene 2. Object Detection. We used 12 objects from the BigBird dataset in all three runs. The two-stage process of object detection (semantic verification) followed by pose estimation (geometric verification) ensured that we do not add false positive detections. Our current distributed optimization technique (DGS) is not robust to outliers. The detection thresholds can be further relaxed when using robust pose graph optimization techniques. In the first run (Stadium-1), 6 objects were added to the map out of 12 objects kept in the environment. Similarly 5 objects were detected in the other two runs. Figure 7 shows the bounding box snapshots of the detected object in three different scenes. Videos showing YOLO object detection results on UW Scenes2 dataset [23] is available at https://​youtu.​be/​urZiIJK2IYk and https://​youtu.​be/​-F6JpVmOrc0. Table 3. Memory and communication requirements for our object based approach (Obj) as compared to Point cloud based approach (PCD) on field data. +-----------+------------------+---------+---------------------+---------+ | Scenario | Avg. Per-Robot | | Avg. Comm. | | +:==========+:=================+:========+:====================+:========+ | | Memory Req. (MB) | | Bandwidth Req. (MB) | | +-----------+------------------+---------+---------------------+---------+ | | PCD | Obj | PCD | Obj | +-----------+------------------+---------+---------------------+---------+ | Stadium-1 | 1.2e+03 | 1.9e+00 | 1.9e+01 | 1.5e−05 | +-----------+------------------+---------+---------------------+---------+ | Stadium-2 | 1.4e+03 | 1.9e+00 | 1.4e+01 | 1.1e−05 | +-----------+------------------+---------+---------------------+---------+ | House | 2.1e+03 | 1.9e+00 | 1.6e+01 | 1.3e−05 | +-----------+------------------+---------+---------------------+---------+ Memory Requirements. Table 3 compares the average memory requirement per robot to store a dense point cloud map (PCD) with respect to storing a object-based map (Obj). The table also compares the average communication requirements in the case of dense point cloud map and object-based map. Per-robot memory requirement in the case of dense point cloud is computed as [$$n\_fKC$$] where [$$n\_f$$] is the number of frames, K is the number of points per frame and C is the memory required to store each point. In the case of object level map, it is computed as [$$n\_oPC$$] where [$$n\_o$$] is the number of object models and P is the average number of points in each object model. Table 3 shows that, as expected, the per-robot memory requirement is orders of magnitude smaller with our object-based map as compared to point-cloud-based maps. When using point clouds, the robots are required sending at least one frame at every rendezvous to estimate their relative pose. So the average communication for dense point cloud map is computed as [$$n\_c K C$$] where [$$n\_c$$] is the number of rendezvous, K is the number of points per frame and C is the memory required to send each point. Communication in the case of our object-based map requires sending object category and object pose; a upper bound can be computed as [$$n\_oL$$] where [$$n\_o$$] is the number of objects and L is the memory required to store category label and pose of an object. Table 3 confirms that our approach provides a remarkable advantage in terms of communication burden as it requires transmitting 6 orders of magnitude less than a point-cloud-based approach. Accuracy. Figure 8 shows the trajectories of the two robots in three runs. The figure compares our approach and the corresponding centralized estimate. Quantitative results are given in Tables 3 and 4, which reports the cost attained by the our approach, the number of iterations, ATE\*, ARE\* as compared to the centralized approach. The table confirms that the distributed approach is nearly as accurate as the centralized Gauss-Newton method and requires very few iterations to compute a good estimate. [] Fig. 8. Field tests: estimated trajectories for the our algorithm (distributed Gauss-Seidel) and for the centralized Gauss-Newton method [21]. Trajectories of the two robots are shown in red and blue. Table 4. Number of iterations, cost, ATE\* and ARE\* of our approach as compared to centralized Gauss-Newton method for Field data [TABLE] 4 Main Experimental Insights In our previous work [3], we proposed a distributed Gauss-Seidel method, which reduces the communication burden of distributed SLAM from quadratic to linear in the number of locations co-observed by the robots. However, the work [3], as most related works, requires the exchange of point clouds among the robots, to estimate relative poses during rendezvous. This communication burden is unnecessary, as it leads to exchanging a large amount of uninformative points, and quickly becomes impractical in presence of bandwidth constraints. In this paper we address this issue by using an object-based representation. Objects provide a suitable abstraction level, and provide a natural tool to compress large point clouds into a semantically meaningful compact representation. In our system, the robots perform local computation to detect objects and compute their pose. We leverage recent progress in object detection using deep learning: this allows us to reliably detect objects in RGB images at high frame rate. Then, during rendezvous the robots only need to exchange the observed object instances and the measured object poses. This allows the robots to greatly minimize the amount of data exchanged with the teammates. Experimental evidence shows that our approach leads to a remarkable reduction in the memory footprint (3 orders of magnitude less) and in the communication requirements (6 orders of magnitude less communication), enabling operation in severely bandwidth-constrained scenarios. The experiments also show that our object-based distributed SLAM approach is as accurate as a standard centralized solver and is able to tolerate a large amount of measurement noise. 5 Conclusions and Future Work We proposed a Multi Robot Object-based SLAM approach that uses object landmarks in a multi robot mapping framework. We showed that this approach (i) reduces the memory requirement and information exchange among robots, (ii) is as accurate as the centralized estimate, (iii) scales well to large number of robots and (iv) is resistant to noise. Our current approach assumes that a model of each observed objects is known in advance. However it can be challenging to store a large number of object models, and to account for intra-class variations. As a future work, we plan to extend our approach to the case where object models are not previously known (at an instance level) and instead object shapes are jointly optimized within our SLAM framework. Another future direction is to improve the robustness of the current pipeline using a distributed algorithm for outlier rejection. References 1. Cunningham, A., Indelman, V., Dellaert, F.: DDF-SAM 2.0: consistent distributed smoothing and mapping. In: IEEE International Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, May 2013 2. Choudhary, S., Trevor, A.J.B., Christensen, H.I., Dellaert, F.: SLAM with object discovery, modeling and mapping. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014, pp. 1018–1025 (2014) 3. Choudhary, S., Carlone, L., Nieto, C., Rogers, J., Christensen, H., Dellaert, F.: Distributed trajectory estimation with privacy and communication constraints: a two-stage distributed gauss-seidel approach. In: IEEE International Conference on Robotics and Automation (ICRA) (2016) 4. Davison, A.J., Reid, I.D., Molton, N.D., Stasse, O.: Monoslam: Real-time single camera slam. IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 1052–1067 (2007)CrossRef 5. Kuipers, B.: The spatial semantic hierarchy. Artif. Intell. 119, 191–233 (2000)MathSciNetCrossRefMATH 6. Salas-Moreno, R.F., Newcombe, R.A., Strasdat, H., Kelly, P.H., Davison, A.J.: SLAM++: Simultaneous localisation and mapping at the level of objects. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013 7. Rogers, J.G., Trevor, A.J.B., Nieto-Granda, C., Christensen, H.I.: Simultaneous localization and mapping with learned object recognition and semantic data association. In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1264–1270, September 2011 8. Trevor, A.J.B., Rogers III., J.G., Christensen, H.I.: Planar surface slam with 3d and 2d sensors. In: IEEE International Conference on Robotics and Automation (ICRA), St. Paul, MN. IEEE, May 2012 9. Kostavelis, I., Gasteratos, A.: Semantic mapping for mobile robotics tasks: a survey. Rob. Auton. Syst. 66, 86–103 (2015)CrossRef 10. Paull, L., Huang, G., Seto, M., Leonard, J.: Communication-constrained multi-AUV cooperative SLAM. In: IEEE International Conference on Robotics and Automation (ICRA) (2015) 11. Indelman, V., Gurfil, P., Rivlin, E., Rotstein, H.: Graph-based distributed cooperative navigation for a general multi-robot measurement model. Int. J. Rob. Res. 31(9), 1057–1080 (2012)CrossRef 12. Dong, J., Nelson, E., Indelman, V., Michael, N., Dellaert, F.: Distributed real-time cooperative localization and mapping using an uncertainty-aware expectation maximization approach. In: IEEE International Conference oF Robotics and Automation (ICRA) (2015) 13. Roumeliotis, S., Bekey, G.: Distributed multi-robot localization. IEEE Trans. Rob. Autom. (2002) 14. Thrun, S., Liu, Y.: Multi-robot SLAM with sparse extended information filers. In: Dario, P., Chatila, R. (eds.) Robotics Research. STAR, vol. 15, pp. 254–266. Springer, Heidelberg (2005). doi:10.​1007/​11008941\_​27 15. Redmon, J., Divvala, S.K., Girshick, R.B., Farhadi, A.: You only look once: Unified, real-time object detection. CoRR abs/1506.02640 (2015) 16. Singh, A., Sha, J., Narayan, K.S., Achim, T., Abbeel, P.: Bigbird: a large-scale 3d database of object instances. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 509–516, May 2014 17. Trevor, A.J.B., Gedikli, S., Rusu, R.B., Christensen, H.I.: Efficient organized point cloud segmentation with connected components. In: Semantic Perception Mapping and Exploration (SPME), May 2013 18. Rusu, R.B., Marton, Z.C., Blodow, N., Dolha, M.E., Beetz, M.: Functional object mapping of kitchen environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2008) 19. Rusu, R.B.: Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments. PhD thesis, Technische Universität München (2009) 20. Segal, A., Haehnel, D., Thrun, S.: Generalized-ICP. In: Proceedings of Robotics: Science and Systems, Seattle, USA, June 2009 21. Dellaert, F.: Factor graphs and GTSAM: A hands-on introduction. Technical Report GT-RIM-CP&R-2012-002, Georgia Institute of Technology, September 2012 22. Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark for the evaluation of RGB-D slam systems. In: Proceedings of the International Conference on Intelligent Robot Systems (IROS), October 2012 23. Lai, K., Bo, L., Ren, X., Fox, D.: A large-scale hierarchical multi-view RGB-D object dataset. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 1817–1824, May 2011 Footnotes 1 The knowledge of the initial pose is only used to facilitate data association but it is not actually used during pose graph optimization. We believe that this assumption can be easily relaxed but for space reasons we leave this task to future work.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_64 Particle Filter Localization on Continuous Occupancy Maps Alberto Yukinobu Hata¹  , Denis Fernando Wolf¹   and Fabio Tozeto Ramos²   (1) Mobile Robotics Laboratory, University of São Paulo, São Paulo, Brazil (2) School of Information Technologies, University of Sydney, Sydney, Australia     Alberto Yukinobu Hata (Corresponding author) Email: hata@icmc.usp.br   Denis Fernando Wolf Email: denis@icmc.usp.br   Fabio Tozeto Ramos Email: fabio.ramos@sydney.edu.au Abstract Occupancy grid maps have been widely used for robot localization. Despite the popularity, this representation has some limitations, such as requirement of discretization of the environment, assumption of independence between grid cells and necessity of dense sensor data. Suppressing these limitations can improve the localization performance, but requires a different representation of the environment. Gaussian process occupancy map (GPOM) is a novel representation based on Gaussian Process that enables the construction of continuous maps (i.e. without discretization) using few laser measurements. This paper addresses a new localization method that uses GPOM to estimate the robot pose in areas not directly observed during mapping and generally provides higher accuracy compared to occupancy grid maps localization. Specifically, we devised a novel likelihood model based on the multivariate normal probability density function and adapted the particle filter localization method to work with GPOM. Experiments showed localization errors more than three times lower in comparison with particle filter localization using occupancy grid maps. Keywords Gaussian Process Occupancy MapsGaussian ProcessParticle Filter LocalizationOccupancy Grid MapSparse laser sensor data 1 Introduction Localization is an important problem in mobile robotics because of its direct impact on other robot tasks, such as path planning and environment mapping. Most of the localization methods use a map of the environment as a reference to correct the robot position. The basic idea is to find the position that maximizes the correspondence between the current sensor data and the map. The occupancy grid maps (OGMs) have been widely applied to this task due to its relative simplicity to build and speed [10]. Despite the popularity of OGMs, they have some drawbacks. First, this representation requires the discretization of the environment, thus the localization precision is limited to the chosen map resolution. Second, independence between grid cells is assumed. Therefore, the occupancy information of neighboring cells is not taken into account during the construction of the map. As a result, for a detailed map, OGMs require dense sensor measurements and intense exploration of the environment. To overcome the limitations of OGM, O’Callaghan and Ramos [6] proposed a novel mapping method named Gaussian Process Occupancy Map (GPOM) that uses Gaussian Process (GP) to enable continuous representation of the environment using a dependent model of the sensor measurements. With GPs, the occupancy status of any place in the environment can be estimated; it is possible to reconstruct non-observed areas, even from highly sparse sensor data. In order to relax the requirement of a dense, accurate and high resolution maps for localization, in this paper, we propose the use of GPOM representation for robot localization. Another advantage of our approach is the possibility to use low-resolution sensors which reduces the size of datasets allowing for lower cost range sensors. As the existing localization methods are designed to work with discrete maps, here we modify the popular Particle Filter localization (PFL) to support continuous occupancy maps (GPOMs) instead. Specifically, a novel observation likelihood model is proposed to update the particles. The main advantage of our approach is the use of occupancy uncertainty information returned by the GP to incorporate the degree of accuracy of the map in the likelihood model. In this way, the association of GPOM with PFL makes the localization more robust to noisy measurements and enables an accurate localization in areas not observed in detail during mapping. Few studies have explored the use of alternative map representations for robot localization. For instance, Yang and Wang [10] proposed OGM maps that include the probability of each cell containing a moving obstacle. Then, this map was used for localization in dynamic environments. Despite the lack of works covering localization in continuous occupancy maps, there are others that apply GP learning for localization. In [2], the authors used GP for SLAM problem based on WiFi signal. [4] proposed the integration of GP with Bayesian filters for blimp tracking. The work of [7] simulated a dense sensor measurement by training a GP model from sparse laser data. In this, a new GP model is trained for each new scan. A similar approach was adopted in [1], but cameras were used to extract the features to build the GP model. Differently from previous works, we are applying GPs to generate a more detailed representation of the environment and enable the robot to precisely localize at any location of the map with a reduced number of laser beams. 2 Gaussian Process Occupancy Maps (GPOMs) A GP is a multivariate Gaussian distribution modeling the function space of a dataset [$$\mathcal{D}=\{\mathbf {x}\_i, y\_i\}\_{i=1}^N$$], where [$$\mathbf {x}\_i \in \mathbb {R}^D$$] and [$$y\_i \in \mathbb {R}$$]. The model is fully specified by a mean function [$$\mu (\mathbf {x})$$], and a covariance function [$$k(\mathbf {x},\mathbf {x'})$$]. In the mapping context, [$$\mathbf {x}\_i$$] and [$$y\_i$$] corresponds respectively to a laser beam and its occupancy status. In GPs, given a query data point [$$\mathbf {x\_\*}$$] and the training dataset [$$\mathcal D$$], the posterior [$$y\_\*$$] is also a Gaussian: [$$\begin{aligned} p(y\_\*|\mathcal{D}, \mathbf {x}\_\*) = \mathcal {N}(\mu (\mathbf {x\_\*}), \sigma (\mathbf {x\_\*})). \end{aligned}$$] (1) The mean and the variance of the posterior are represented respectively by [$$\mu (\mathbf {x\_\*})$$] and [$$\sigma (\mathbf {x\_\*})$$], which are obtained by: [$$\begin{aligned} \mu (\mathbf {x\_\*})&= k(\mathbf {x\_\*},\mathbf {x})^T[k(\mathbf {x},\mathbf {x})+\sigma ^2\_n I]^{-1}y,\end{aligned}$$] (2) [$$\begin{aligned} \sigma (\mathbf {x\_\*})&= k(\mathbf {x\_\*},\mathbf {x\_\*})-k(\mathbf {x\_\*},\mathbf {x})[k(\mathbf {x},\mathbf {x})+\sigma ^2\_n I]^{-1}k(\mathbf {x\_\*},\mathbf {x}), \end{aligned}$$] (3) where [$$\sigma \_n$$] is the global noise value. The Gaussian Process of [$$\mathbf {x\_\*}$$] is denoted as [$$\mathcal {GP}(\mu (\mathbf {x\_\*}), \sigma (\mathbf {x\_\*}))$$]. For GPOM, generally the Matérn 3 / 2 covariance function is employed [6] because it is less smooth compared to the squared exponential covariance function, which is more convenient to represent the occupancy values in space: [$$\begin{aligned} k(\mathbf {x},\mathbf {x'}) = \sigma \_f\left( 1+\frac{\sqrt{3}\left| \mathbf {x}-\mathbf {x'}\right| }{l}\right) \exp \left( -\frac{\sqrt{3}\left| \mathbf {x}-\mathbf {x'}\right| }{l}\right) , \end{aligned}$$] (4) where [$$\sigma \_f$$] and l are respectively, the signal variance and length-scale of the covariance function. The main advantage of GP is its capacity of learning hyper-parameters for both mean and covariance functions by maximizing the marginal likelihood of the data. The hyperparameters of the covariance function, given by [$$\theta = \{\sigma \_f, l\}$$] are obtained through the maximization of the log marginal likelihood function [8]: [$$\begin{aligned} \log p(y|\mathbf {x},\theta ) = \frac{1}{2}y^TK^{-1}y-\frac{1}{2}\log |K|-\frac{n}{2}\log 2 \pi , \end{aligned}$$] (5) where K is the covariance matrix of the training dataset [$$\mathcal D$$] with size n. GPs were originally proposed to solve regression problems. Therefore, adaptations in the original formulation were made for classification problems, as in the case of GPOM. For GP classification, the mean value resulted from regression is squashed into [$$\left[ 0,1\right] $$] interval using a sigmoid function. For this, the probabilistic least squares function [8] is used: [$$\begin{aligned} p(occupancy|\mathcal{D}, \mathbf {x}\_\*) = \varPhi \left( \frac{\alpha \mu (\mathbf {x\_\*})+\beta }{1+\alpha ^2\sigma (\mathbf {x\_\*})^2}\right) , \end{aligned}$$] (6) where [$$\varPhi $$] is the cumulative Gaussian distribution and the parameters [$$\alpha $$] and [$$\beta $$] are obtained by leave-one-out cross-validation. 3 GPOMs for Large Environments Considering the [$$\mathcal {O}(n^3)$$] cost of the GP prediction, the construction of GPOMs for large areas involving millions of sensor measurements is computationally unfeasible. The major bottleneck in the Gaussian Processes regression is associated to the matrix inversion [$$K^{-1}$$], which involves the solution of a large linear system. In order to reduce the complexity, Cholesky decomposition can be applied to update the inverted matrix as new measurements are obtained. This reduces the matrix inversion complexity from [$$\mathcal {O}(n^3)$$] to [$$\mathcal {O}(n^2)$$]. Despite reducing the GP complexity, depending on the amount of measurements, Cholesky decomposition alone can be insufficient for suitable computation times. Therefore, two other approaches are also employed: (a) reduced training dataset and (b) GP committees. These methods are described in the following sections. 3.1 Information Theoretic Compression of Training Data When gathering sensor data there are generally redundant measurements that could be eliminated to speed up the computation. Unnecessary data can be discarded by adapting the information-theoretic compression of laser data method proposed in [5]. The idea behind is to evaluate the mutual information between the measurement and the existing dataset. Only laser readings that reduce the uncertainty about the environment the most are kept in the dataset. 3.2 Mixture of Gaussian Processes Instead of generating a single map, the environment can be split into smaller regions and then produce a set of GPOMs. The strategy of using several GPs is named mixture of GPs and its application for mapping was proposed in [3]. The first step of the mixture of GPOMs is to cluster the data (training dataset) according to a distance measure. Here, k-means clustering was employed. Given the maximum number of measurements s that each GP expert must handle, the number of clusters is set as [$$k \ge \frac{n}{s}$$]. After clustering, each measurement will be associated with a centroid [$$\{c\_{i=1\cdots k}\}$$] and a GP expert [$$\{\varepsilon \_{i=1\cdots k}\}$$]. When building a dense map, a set of test points [$$D = \{d\_{j=1\cdots m}\}$$] must be evaluated. A gating network evaluates which expert should be chosen to infer the occupancy of test points. For a test point [$$d\_j$$], we associate the expert [$$\varepsilon \_i$$] whose corresponding cluster [$$c\_i$$] is the closest to [$$d\_j$$]. 4 Particle Filter Localization for GPOM After generating the GPOM, particle filter localization (PFL) uses the map information to estimate the robot position. The standard PFL algorithm starts by randomly distributing particles over the environment. The particle set is represented by [$$\mathbf {S}\_k = \left\{ \mathbf {s}^i\_k\right\} \_{i=1}^n$$], where n is the number of particles and k represents the time stamp. Each particle [$$\mathbf {s}^i\_k$$] stores the position [$$\mathbf {x}^i\_k = \{x^i\_k,y^i\_k,\theta ^i\_k\}$$] and the importance weight [$$w^i\_k$$] of the particle. For each PFL iteration, [$$\mathbf {S}\_k$$] is updated through an auxiliary particle set [$$\mathbf {S}'\_k$$]. The following steps comprise the particle set update: 1. 1. Perform motion update from the probability [$$p(\mathbf {x}^i\_k|\mathbf {x}^i\_{k-1}, \mathbf {u}\_{k})$$], where [$$\mathbf {u}$$] is the robot motion.   2. 2. Compute the measurement update using the probability [$$p(\mathbf {z}\_k|\mathbf {x}^i\_k,m)$$], which corresponds to the particle weight [$$w^i\_k$$] and also known as measurement likelihood function [$$\begin{aligned} w^i\_k = p(\mathbf {z}\_k|\mathbf {x}^i\_k,m), \end{aligned}$$] (7) where, [$$\mathbf {z}$$] and m correspond to the sensor measurement and the environment map, respectively.   3. 3. Choose randomly with replacement n particles from [$$\mathbf {S}'\_k$$]. Particles of higher weights have higher probability to be selected. The chosen particles replace the current [$$\mathbf {S}\_k$$] set. The particle with the highest weight in [$$\mathbf {S}\_k$$] is chosen to represent the robot position.   The second step of PFL is the most crucial, because it directly affects the localization robustness. Here we devise a new likelihood function to use together with GPOM. In our measurement likelihood function, occupancy and geometric information are used. To calculate the occupancy, first the laser end point pose is calculated from the laser range and the particle’s position: [$$\begin{aligned} \mathbf {p}^i\_k = \{x^i\_k + r^j\_k\cos (a^j\_k + \theta ^i\_k); y^i\_k + r^j\_k\sin (a^j\_k + \theta ^i\_k)\}, \end{aligned}$$] (8) where [$$r^j\_k$$] and [$$a^j\_k$$] denote the measurement [$$\mathbf {z}\_k$$] range distances and beam angles, respectively, and j denotes the beam index. From a particle [$$\mathbf {s}^i\_k$$], occupancy mean [$$\varvec{\mu \_o}$$] and variance [$$\varvec{\sigma \_o}$$] is obtained from each laser beam end point through GP prediction: [$$\begin{aligned} \varvec{occupancy}^j\_k = \mathcal {GP}(\varvec{\mu \_o}(\mathbf {p}^i\_k),\varvec{\sigma \_o}(\mathbf {p}^i\_k)). \end{aligned}$$] (9) However, using just the occupancy information can easily lead to ambiguous positions. Therefore, it is also necessary to include geometric information of the laser measurements into the likelihood model. We used the distance [$$\varvec{\mu \_r}$$] between the particle to the first point along the laser beam direction which has occupancy mean higher than 0.99. Essentially, a laser measurement is simulated in the particles’ pose to determine how far the particle is to the obstacle. The variance of the distance [$$\varvec{\sigma \_r}$$] is given by the squared difference from the range distance returned by the sensor and [$$\varvec{\mu \_r}$$]. Thus, the following array of mean and variance values are obtained from a single particle [$$\mathbf {s}^i\_k$$], given the map and measurement information: [$$\begin{aligned} \varvec{\mu }^i\_k = \begin{bmatrix} \varvec{\mu }^i\_{o}\\ \varvec{\mu }^i\_{r} \end{bmatrix}\_k \;\;\;\;\;\; \varvec{\sigma }^i\_k = \begin{bmatrix} \varvec{\sigma }^i\_{o}\\ \varvec{\sigma }^i\_{r} \end{bmatrix}\_k, \end{aligned}$$] (10) where [$$\varvec{\mu }^i\_{r}$$] and [$$\varvec{\sigma }^i\_{r}$$] represent respectively the mean and variance of range measurement error of the particle [$$\mathbf {s}^i\_k$$]. To calculate the observation likelihood, the normal distribution of the current laser measurement is compared with the normal distribution obtained from the occupancy and geometric information of a particle. This is done by connecting the measurement and the particle information in the multivariate normal probability density function: [$$\begin{aligned} p(\mathbf {z}\_k | \mathbf {x}^i\_k, m, \varvec{\mu }^i\_k, \varvec{\varSigma }^i\_k)^ = \frac{1}{2\pi ^\frac{N}{2} |\varvec{\varSigma }|^{\frac{1}{2}}} \exp ^{\frac{1}{2} (z\_k - \varvec{\mu })^T \varvec{\varSigma }^{-1} (z\_k - \varvec{\mu })}, \end{aligned}$$] (11) where, [$$\varvec{\varSigma }$$] is the covariance matrix with [$$\varvec{\sigma }$$] as the diagonal values and N is the [$$\varvec{\mu }$$] array size. For numerical stability, [$$\log (p(\mathbf {z}\_k | \mathbf {x}^i\_k, m))$$] is used to update the particle weight [$$w^i\_k$$]. Here, [$$z\_k$$] is an array formed by the beam end point occupancy values (sequence of ones) and the ranges returned by the laser sensor ([$$r^j\_k$$]). 5 Results For the validation experiments, we used laser data collected in simulated and in real environments. In the simulated scenario, we tested the global localization performance in maps with areas that was intentionally not observed and using sparse laser readings. Experiments with real robots and noisy laser measurements in larger areas were conducted using publicly available datasets from Freiburg¹ and Rawseeds² repositories. From the Freiburg repository we used the Seattle and Belgioioso datasets, while from Rawseeds we used the Bicocca dataset. In all experiments, distinct measurement sets were used for mapping and localization tasks. [] Fig. 1. Sparse laser dataset (a) used to build the OGM (b) and GPOM (c) representations. Few measurements were intentionally gathered from the central corridor. GPOM could recreate the corridor area and reconstruct objects of the scenario (e.g. boxes on the right side), as opposite to OGM. In the mapping stage, OGMs were generated with [$$0.10\ \mathrm {m}$$] resolution cell which can provide fine details of the scene. We used standard GPOM for all scenarios except the Bicocca dataset which used the strategy for large maps. Here the combination of PFL with OGM is named OGM-PFL and the combination of PFL with GPOM is named GPOM-PFL. We used the multivariate normal probability density function of Eq. 11 to evaluate the likelihood in the GPOM-PFL. In all experiments we performed global localization using 1000 particles. The localization results were evaluated through the Absolute Trajectory Error (ATE) metric [9] that calculates the Euclidean distance between the estimated and ground truth poses. For the simulated dataset, we first built the maps using the training data, composed by 22 measurements with 17 laser beams each (less than [$$\frac{1}{10}$$] of the standard laser range sensors). The obtained OGM and GPOM are illustrated in Fig. 1. Higher occupancy probabilities are associated to values closer to 1.0. Even with sparser measurements, GPOM could reconstruct the scenario with richer details than OGM. For example, it could estimate the occupancy of non-observed areas (the narrow corridor) and reconstruct some objects (boxes on the right side). [] Fig. 2. Estimated poses (blue line) compared to ground truth (green line). Particles’ variance is represented by the ellipses. The OGM-PFL (b) variance increases around the corridor, while the GPOM-PFL (c) variance stay low. [] Fig. 3. Mapping results of the Freigburg dataset. (a)-(b) occupancy grid maps and (c)-(d) Gaussian process occupancy maps. Table 1. ATE, standard deviation and orientation error of the localization experiments. The GPOM solution outperformed OGM in all scenarios. +-------------------+-------------------------------+------------------------------------+-------------------+ | Map | Mean ATE ([$$\mathrm {m}$$])  | S. Dev. ATE ([$$\mathrm {m}^2$$])  | Ori. Error (rad)  | +:==================+:==============================+:===================================+:==================+ | OGM - Simulated | 0.1095 | 0.9987 | 0.0238 | +-------------------+-------------------------------+------------------------------------+-------------------+ | GPOM - Simulated | 0.0376 | 0.3330 | 0.0073 | +-------------------+-------------------------------+------------------------------------+-------------------+ | OGM - Seattle | 0.3645 | 2.6835 | 0.0404 | +-------------------+-------------------------------+------------------------------------+-------------------+ | GPOM - Seattle | 0.2612 | 2.9229 | 0.0351 | +-------------------+-------------------------------+------------------------------------+-------------------+ | OGM - Belgioioso | 0.1939 | 2.1926 | 0.0304 | +-------------------+-------------------------------+------------------------------------+-------------------+ | GPOM - Belgioioso | 0.1419 | 1.5198 | 0.0232 | +-------------------+-------------------------------+------------------------------------+-------------------+ | OGM - Bicocca | 0.7410 | 5.5250 | 1.9680 | +-------------------+-------------------------------+------------------------------------+-------------------+ | GPOM - Bicocca | 0.1194 | 0.2867 | 0.0252 | +-------------------+-------------------------------+------------------------------------+-------------------+ Using these maps, the OGM-PFL and GPOM-PFL methods were evaluated using the test data shown in Fig. 2(a). This dataset contains 221 scans, each with 80 laser beams. The ATE and absolute orientation error values are in Table 1. GPOM-PFL delivered results almost three times lower than OGM-PFL. Figure 2(b–c) shows the estimated poses (blue line) and particle variance (ellipses) for both approaches. We can notice that OGM-PFL produces higher errors around the narrow corridor, as opposite to GPOM-PFL. For the Seattle and Belgioioso datasets, we performed the same mapping and localization experiments. These environments are larger compared to the simulated dataset and can test the mapping and localization performance using noisy laser measurements. During mapping, 241 and 132 measurements from Seattle and Belgioioso datasets were used, respectively. Each measurement contains 22 laser beams. Mapping results are presented in Fig. 3. The GP based mapping reconstructed fine details of a relatively large scenario using just few laser measurements. Localization was performed using 1 degree resolution laser sensor. Numerical results for localization are presented in Table 1. Note that GPOM-PFL delivered lower ATE and orientation errors. As the Bicocca dataset is the largest environments, GP calculation becomes very time consuming. To address this, a mixture of 10 GP experts was used to reduce the computational cost. After information theoretic compression, the mapping dataset is formed by 796 poses and a total of 8327 laser beams. The obtained OGM and GPOM are presented in Fig. 4. The localization result for the Bicocca dataset is presented in Table 1. It is possible to notice a much lower ATE and orientation errors for GPOM-PFL. Figure 4(c–d) illustrate the estimated pose for each approach. Differently than OGM-PFL, GPOM-PFL resulted in poses that more closely matched the ground truth along all trajectory. [] Fig. 4. (a–b) OGM and GPOM generated from Rawseeds’ Biccoca dataset. (c–d) Localization results of OGM-PFL and GPOM-PFL using the bicocca dataset. 6 Conclusion and Future Works We proposed the combination of GPOM with PFL to improve localization in situations where the data is sparse or when there are occlusions. The advantage of GPOM is the possibility to predict the occupancy at any position in space from a set of sparse measurements. To make GPOM work together with PFL, we modeled a likelihood function based on multivariate normal probability density function that uses the occupancy mean and variance, and the range information of each measurement. With this, all the information retrieved from the map can be combined into a single model by considering dependence between all data points. We run experiments in two datasets: one obtained from a simulated environment; and three obtained from real world environments. GPOM-PFL results were compared with the standard version of PFL. In all experiments the GPOM-PFL obtained ATE and orientation errors at least 65 % more accurate than conventional PFL. From these results we can say that GPOM-PFL can handle noisy sensor data, does not need dense sensor measurements or high-frequency odometer data. It also showed the possibility to provide an accurate pose estimation while traversing areas with less sensor information in the map. GPOM-PFL demonstrated to be a promising localization solution compared to approaches based on discrete maps. Despite the favorable results of GPOM-PFL, experiments in other datasets are still required, such as in dynamic environments and outdoor environments. The GP method is also known to be computationally expensive due to the [$$O(n^3)$$] cost for inverting a matrix. Solutions for reducing the computational time must be explored, and understanding the trade-off between speed and accuracy of localization is an interesting venue for future work. Acknowledgments The authors acknowledge the grant provided by FAPESP (2012/02354-1; 2014/09096-3), the ACFR and LRM groups for their support. References 1. Brooks, A., Makarenko, A., Upcroft, B.: Gaussian process models for indoor and outdoor sensor-centric robot localization. IEEE Trans. Robot. 24(6), 1341–1351 (2008)CrossRef 2. Ferris, B., Fox, D., Lawrence, N.: WiFi-SLAM using gaussian process latent variable models. In: Proceedings of the 20th International Joint Conference on Artifical Intelligence, IJCAI 2007, San Francisco, CA, USA, pp. 2480–2485 (2007) 3. Kim, S., Kim, J.: Building occupancy maps with a mixture of Gaussian processes. In: 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 4756–4761, May 2012 4. Ko, J., Fox, D.: Gp-bayesfilters: Bayesian filtering using gaussian process prediction and observation models. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2008, pp. 3471–3476, September 2008 5. Kretzschmar, H., Stachniss, C.: Information-theoretic compression of pose graphs for laser-based slam. Int. J. Robot. Res. 31(11), 1219–1230 (2012). doi:10.​1177/​0278364912455072​. http://​ijr.​sagepub.​com/​content/​31/​11/​1219 CrossRef 6. O’Callaghan, S.T., Ramos, F.T.: Gaussian process occupancy maps. I. J. Robot. Res. 31(1), 42–62 (2012)CrossRef 7. Plagemann, C., Kersting, K., Pfaff, P., Burgard, W.: Gaussian beam processes: a nonparametric bayesian measurement model for range finders. In: Proceedings of Robotics: Science and Systems, RSS (2007) 8. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press (2005). ISBN:026218253X 9. Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark forthe evaluation of RGB-D SLAM systems. In: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 573–580, October 2012 10. Yang, S.W., Wang, C.C.: Feasibility grids for localization and mapping in crowded urban scenes. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 2322–2328, May 2011 Footnotes 1 http://​www2.​informatik.​uni-freiburg.​de/​~stachnis/​datasets/​.   2 http://​www.​rawseeds.​org/​home/​category/​benchmarking-toolkit/​datasets/​.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_65 Experimental Methods for Mobility and Surface Operations of Microgravity Robots Benjamin Hockman¹  , Robert G. Reid²  , Issa A. D. Nesnas²   and Marco Pavone¹   (1) Department of Aeronautics and Astronautics, Stanford University, Stanford, California 94305, USA (2) Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California 91109, USA     Benjamin Hockman (Corresponding author) Email: bhockman@stanford.edu   Robert G. Reid Email: rgreid@jpl.nasa.gov   Issa A. D. Nesnas Email: nesnas@jpl.nasa.gov   Marco Pavone Email: pavone@stanford.edu Abstract We propose an experimental method for studying mobility and surface operations of microgravity robots on zero-gravity parabolic flights—a test bed traditionally used for experiments requiring strictly zero gravity. By strategically exploiting turbulence-induced “gravity fluctuations,” our technique enables a new experimental approach for testing surface interactions of robotic systems in micro- to milli-gravity environments. This strategy is used to evaluate the performance of internally-actuated hopping rovers designed for controlled surface mobility on small Solar System bodies. In experiments, these rovers demonstrated a range of maneuvers on various surfaces, including both rigid and granular. Results are compared with analytical predictions and numerical simulations, yielding new insights into the dynamics and control of hopping rovers. This research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. It was funded by the NASA Innovative Advanced Concepts and Flight Opportunities programs. Government sponsorship acknowledged. The authors wish thank J. Castillo-Rogez (JPL), A. Frick (JPL), J. Hoffman (MIT), E. Carey (JPL), D. Delrosso (JSC), and R. Roe (JSC) for their insightful discussions. 1 Introduction Small Solar System bodies, such as comets, asteroids, and irregular moons, have become high-priority targets for planetary exploration [1, 2]. Remote observations have suggested that many small bodies are topographically diverse both in composition and structure, requiring targeted measurements at multiple locations to characterize [2]. Accordingly, controlled surface mobility on small bodies was recently identified by the National Research Council as a high priority for NASA’s technology development [3]. Controlled mobility in microgravity is challenging due to the almost complete lack of traction. Traditional wheeled vehicles, which rely on their weight to grip the surface, are restricted to extremely low speeds in microgravity and are highly susceptible to losing surface contact and flipping over when traversing uneven terrain. Several mobility techniques have been proposed for maneuvering in the microgravity environments found at the surface of small bodies. Specifically, hopping has been recognized by agencies such as NASA [4, 5], ESA [6], RKA [7], and JAXA [8], as having many advantages over techniques such as wheeled and legged systems. In fact, two hoppers are currently en route to Asteroid 162173 Ryugu aboard JAXA’s Hayabusa 2 spacecraft: a MASCOT lander developed by DLR [6] and three MINERVA landers [8], which are both equipped with momentum devices that allow them to hop, albeit with minimal control. 1.1 Hedgehog Hopping Rover This paper considers, as a case study, a hopping rover developed by the authors called “Hedgehog,” which utilizes internal actuation (via three mutually-orthogonal flywheels) to generate controlled directional hops in microgravity (see Fig. 1, left). Specifically, by applying an internal torque on the flywheels via motors and mechanical brakes, the chassis rotates and induces external reaction forces on the surface, producing ballistic hops (see Fig. 1, right). This mobility technique, as investigated in [9–11], offers a simple, yet uniquely capable, architecture for targeted mobility on small bodies (see overview video at: http://​youtu.​be/​bDmoqjNQAu8). Specifically, [11] derives flywheel control laws for a variety of “motion primitives” (e.g., hopping, tumbling, and twisting) that have demonstrated a previously unobtained level of precision in simulations and ground-based experiments. [] Fig. 1. Left: Hopping rover prototype shown without avionics, covers, or solar panels. The cubic chassis encloses three orthogonal flywheels and is surrounded by eight compliant spikes on its corners. Right: By accelerating internal flywheels, surface reaction forces cause the rover to tumble or hop. 1.2 Experiments in Microgravity One of the most challenging tasks when developing robotic systems for microgravity is testing in relevant environments. Here, and throughout this paper, “microgravity” refers to the small, but importantly, non-zero gravity (roughly [$$10^{-5}$$] to [$$10^{-2}$$] g’s) exerted by a small body. At these scales, surface reaction forces are small and motion is slow, so it is not practical or comparable to test microgravity systems in 1 g environments (as can be done, for example, with Martian rovers). This is an issue for mobility systems as well as other surface operations such as excavation or anchoring devices. Instead, there have been various methods proposed for emulating reduced gravity on Earth, which can be roughly divided into two classes: (1) free-fall test beds, such as drop towers and parabolic flights, and (2) gravity-offloading test beds that aim to “counteract” the force of gravity. Various gravity-offloading approaches have been demonstrated, including buoyancy tanks, air-bearing tables [9], passive counterweight mechanisms [9, 10], and actively controlled tracking systems [11–14]. Gravity-offloading test beds generally allow for longer duration and less expensive tests than free fall chambers, but they typically introduce undesirable exogenous dynamics and/or restrict the system’s range of motion. For the Hedgehog rover presented in Sect. 1.1, Hockman et al. developed a first-of-a-kind test bed at Stanford University, uniquely capable of tracking the Hedgehog’s motion in 6 degrees of freedom (DoF) under dynamic force inputs [11]. It consists of an actively-controlled overhead 3-axis gantry crane that tracks the translational motion of the Hedgehog at an effective 0.0005 - 0.005 g’s, and a passive gimbal that allows the Hedgehog to freely rotate about all three axes (see Fig. 2). While this test bed has demonstrated effective tracking performance for some types of maneuvers such as small hops and tumbles, it has three inherent limitations: (1) it cannot track fast maneuvers such as more aggressive hops, (2) the added mass and inertia of the gimbal prevent accurate tracking of rotations about non-symmetric axes, and (3) it cannot offload the surface regolith’s mass, which, especially for loose granular materials, can behave quite differently in microgravity. [] Fig. 2. The Stanford 6 DoF microgravity test. The powered gantry tracks the translational motion of the Hedgehog, while allowing for free fall at sub-milli-g levels. The gimbal frame allows the Hedgehog to rotate in all three axes. Statement of Contributions : The contributions of this paper are twofold: first, we propose a novel experimental method that utilizes zero-g parabolic flights—a test bed traditionally used for experiments requiring strictly zero gravity — for testing microgravity surface operations (Sect. 2). Our approach exploits the “gravity fluctuations” induced by turbulence on the aircraft to trigger experiments during windows of acceptable conditions. The proposed technique avoids many of the limitations observed in gravity-offloading test beds, and thus offers a complementary approach to testing robotic systems, such as Hedgehog, that are designed for microgravity environments. Second, we use this experimental procedure to evaluate the controllability of two Hedgehog prototypes performing various maneuvers on several rigid and granular surfaces. The results largely agree with predictions based on analytical and numerical models (Sect. 3). To the best of the authors’ knowledge, these experiments constitute some of the first demonstrations of controlled hopping on a zero gravity aircraft. 2 Parabolic Flight Experiments Parabolic flights offer a unique environment to conduct experiments in effectively reduced gravity, but they pose significant challenges for systems that require smooth and stable accelerations. During each parabola, disturbances caused by turbulence and control errors induce “gravity fluctuations” on the order of [$$\pm 0.03$$] g’s. Figure 3 shows a representative example of time-series acceleration data collected on NASA’s C9 aircraft. Brief periods of negative g’s are particularly problematic for unrestrained robots that need to remain in contact with a surface (such as our Hedgehog), since they will inadvertently float away. One solution is to “positively bias” the effective gravity such that it never goes negative. Typically, this can only be afforded for a small fraction of parabolas since multiple experimental payloads are flown on each flight, and most require net-zero gravity. During zero-g parabolas, however, gravity fluctuations often produce brief periods with slightly positive gravity conditions that can be utilized for microgravity experiments. With this in mind, we propose an experimental method that systematically exploits these fluctuations to enable surface-interaction experiments with microgravity robotic systems. [] Fig. 3. Example time-series acceleration data from C9 reduced gravity aircraft during a parabola. Top: Accelerations of the aircraft in the aircraft reference frame. Bottom: Effective slope of the experimental payload; 0[$$^\circ $$] is horizontal, while 90[$$^\circ $$] is sideways. Our experimental setup is as follows (see Fig. 4): the Hedgehog prototype sits on the test surface and is restrained by a retractable arm that applies a gentle downwards force. An accelerometer, rigidly mounted to the floor of the aircraft, measures the transient accelerations and is used to automatically retract the arm and initiate each experiment when the resulting gravity conditions are deemed acceptable. An array of small cameras fixed inside the payload container track the Hedgehog’s motion (position and attitude) with millimeter precision and high frame rates (240 Hz) via body-mounted fiducial markers. Some cameras were also focused on the surface to observe contact interactions. [] Fig. 4. Experimental setup. Left: The Hedgehog (A) is held in place on the test surface (C) by an actuated arm (B). An array of five cameras (D) capture its motion as it hops within the container. Right: Photo of our experiments on NASA’s C9 aircraft. “Acceptable” gravity conditions for triggering an experiment should be tailored for the particular system being tested. Since Hedgehog actuates while in contact with the surface, for example, we defined the triggering condition as the first point at which (1) the total acceleration magnitude is less than 0.02 g’s (with a low-pass filter) and (2) the effective slope is less than [$$30^\circ $$] relative to the surface normal (to avoid sliding or tipping before actuation). The shaded region in Fig. 3 shows the time at which an experiment was triggered for that particular parabola. Historical acceleration data for the particular aircraft should be analyzed to validate the desired triggering condition and to predict the percentage of parabolas that would allow successful triggering. Thus, for experiments with flexible gravity requirements, there is an inherent trade between the acceptable parabola quality and probability of a trigger occurring. To emulate a range of surface properties that rovers may encounter on small bodies, the experimental payload container in Fig. 4 was fitted with two boxes (labeled “C”) that provided a total of four different test surfaces: a low-friction Kapton tape covering the (rigid) lid of one box, a high-friction grip tape on the other lid, a cohesive comet regolith simulant [15], and a low-cohesion garnet sand (see Fig. 5). The two rigid surfaces, “A” and “B,” aim to mimic rocky or icy surfaces with varying degrees of “traction.” The simulant “C,” is a crushable material consisting of an aerated cement that mimics cohesive (yet friable) regolith. Its cohesive properties allowed it to be exposed during negative g’s, unlike the garnet sand, “D.” [] Fig. 5. Test surfaces: (A) low-friction and (B) high-friction rigid surfaces, (C) crushable comet regolith simulant, and (D) granular, low-cohesion garnet sand. 3 Mobility Experiments The experimental techniques discussed in Sect. 2 were used to evaluate the controllability of two Hedgehog prototypes performing maneuvers on various surfaces. Over the course of four flights, 74 of 190 parabolas resulted in successfully triggered mobility experiments. Of those, 64 were performed on three different surfaces in zero-g parabolas (0 [$$\pm ~0.02$$] g), while 10 were performed on the garnet sand in positively-biased parabolas (0.03 [$$\pm ~0.02$$] g). The remaining parabolas experienced unfavorable gravity conditions or technical difficulties (timing error of the arm release and hop trigger, operator error, software bugs, and wireless interference from aircraft communication). Most parabolas were utilized for hopping experiments, but a few were also used to test more precise maneuvers such as tumbling and twisting. A video compilation of several maneuvers can be found at http://​web.​stanford.​edu/​~pavone/​iser16. 3.1 Predictive Modeling and Analysis Several numerical and analytical models have been designed to study the dynamics of this mobility platform and to derive control laws for executing deliberate maneuvers (see [11] for details). By simplifying the rover’s geometry, and assuming instantaneous momentum transfer with no slipping during contact, the two models in Fig. 6A and B allow control laws to be derived analytically from rigid body dynamics and angular momentum arguments. These control laws depend on the rover’s geometric and inertial properties, as well as its resting pose on the surface. For hopping with high-torque brakes, these control laws map a desired takeoff velocity vector to the prerequisite angular speeds of the flywheels. [] Fig. 6. Dynamic Models. A: 2D model used to derive control laws for single-axis hops [11]. B: Hedgehog is modeled as a cube pivoting on one of its corners, which is used to derive control laws for directional hops [11]. C: A numerical contact model assumes Coulomb friction and arbitrary penetrating force function. To study more realistic dynamics, a penetrating contact model was designed to allow slip and surface deformation that is numerically integrated to solve for the Hedgehog’s trajectory (see Fig. 6C). By varying the friction coefficients ([$$\mu \_s, \mu \_k$$]) and the penetration force function (F) this “elastic sliding block” model can approximate a wide variety of surface properties. For most rigid (or near-rigid) surfaces, a damped elastic model works well (i.e. [$$F = kl + b\dot{l}$$]), but more complex nonlinear models can also be devised to capture surface deformation effects. While a numerical approach does not yield analytical control insights, it is useful for understanding motion on irregular surfaces and the response in subsequent surface collisions. These models can now serve as a basis for comparison with data collected on microgravity experiments. 3.2 Hopping Experiments Since the dynamics of a hopping rover in ballistic flight are deterministic (for an airless body with known spin and gravity model), we can characterize the resulting trajectory with three parameters describing its initial launch velocity vector: speed ([$$v\_\text {h}$$]), elevation angle ([$$\theta \_\text {h}$$]), and azimuth angle ([$$\phi \_\text {h}$$]). These parameters are extracted from the visual tracking data by fitting a parabola to the time-series position measurements of the Hedgehog’s mass center for the first 20 cm of its trajectory after takeoff. The observed hop vectors can then be compared with predictions obtained by inputting the observed flywheel speeds into our models. [] Fig. 7. Experimental trajectory data (speed, elevation, and azimuth) for 65 hopping maneuvers on 5 surfaces (see Fig. 5) compared with predictions based on an analytical model (left plots) and a numerical model (right plots). “Rough simulant” corresponds to a few experiments in which the comet regolith simulant was highly fractured and uneven. Predictions and observations that are in agreement lie along the black lines with slope 1. The table of mean absolute errors summarizes these results. The results in Fig. 7 show predicted values on the horizontal axis and measured trajectory data on the vertical axis. Each data point represents a trajectory resulting from a particular set of flywheel speeds. The analytical model used for comparison in the left plots is shown in Fig. 6B, and the numerical model for the right plots, in Fig. 6C. Overall, there was strong agreement between the experimental and model-generated data with mean absolute errors of about [$$10\%$$] for speed, and [$$5^\circ $$] for elevation and azimuth angles. It is important to note that Hedgehog experienced slight drift before actuation on many of these maneuvers, such that its initial state was not exactly grounded and stationary, as assumed by the analytical model. Therefore, it is not surprising that the numerical model, which is simulated from the actual measured initial states and accounts for the varying surface properties, exhibits stronger agreement with the data than the less-informed analytical model. Examining systematic bias in the data can help to identify unmodeled effects and make improvements. For example, the clustering of analytical elevation predictions at 45[$$^\circ $$] reflects the no-slip and instantaneous momentum transfer assumptions for single-axis hops, which are not realizable for lower friction surfaces and brakes with limited torque. If, however, information about the surface friction is known a priori, the control law can be adjusted to reflect the higher expected elevation ([$$\theta \_\text {h} \approx \cot ^{-1}\mu $$]). Also, contact interactions with loose granular regolith, which is essentially “fluidized” by microgravity, does not adhere well to either the pin-jointed spike contact assumption or the numerical contact model (such as overestimated hop elevation on sand in Fig. 7); it will be the subject of future work. Finally, it is suspected that the high-speed outliers can be attributed to a temporary hardware issue with one of the prototype’s braking mechanisms. 3.3 Tumbling Experiments Tumbling is simply a less energetic form of hopping, whereby the Hedgehog rotates about a pair of spikes without losing ground contact, nominally rotating [$$90^\circ $$] and translating one body length. For this single-flywheel maneuver, an upper and lower bound on the control input are derived in [11], which correspond to the speed at which the Hedgehog would rotate too fast and lose surface contact and the speed at which it would just barely tip over, respectively. [$$\begin{aligned} \omega \_{\text {max}} = \sqrt{\frac{g \cos \beta }{\eta ^2 l \cos \alpha }}, \qquad \omega \_{\text {min}} = \sqrt{\frac{2 m\_{\text {p}} g l (1 - \cos (\alpha + \beta ))}{\eta I\_{\text {f}}}}. \end{aligned}$$] (1) These bounds are functions of the Hedgehog’s inertial and geometric properties ([$$m\_{\text {p}}, I\_{\text {f}}, \eta , \alpha , l$$]), its initial pose ([$$\beta $$]), and gravity (g) (see [11] for details). However, due to the need to maintain continuous ground contact over a longer time period, tumbling maneuvers could not exploit brief gravity transients and were therefore restricted to positively-biased parabolas. Table 1 summarizes data for the two tumbles performed. Table 1. Data from two successful tumbling experiments on sand at about 0.035 g’s. Note that the measured flywheel speed ([$$\omega $$]) is indeed between the predicted minimum and maximum bounds (see Eq. 1). \*Negative inclination indicates a “downhill” tumble. +-------+---------+--------------------+------------------------------------+------------------------------------+----------------------+----------+ | Trial | Surface | Inclination\* |  [$$\omega \_{\text {min}}$$] (rpm) |  [$$\omega \_{\text {max}}$$] (rpm) |  [$$\omega $$] (rpm) | Success? | +:======+:========+:===================+:===================================+:===================================+:=====================+:=========+ | 1 | sand | [$$-10.7^\circ $$] | 1451 | 2937 | 1968 | Yes | +-------+---------+--------------------+------------------------------------+------------------------------------+----------------------+----------+ | 2 | sand | [$$-22.7^\circ $$] | 255 | 837 | 274 | Yes | +-------+---------+--------------------+------------------------------------+------------------------------------+----------------------+----------+ While the data is sparse, a few insightful observations were made. For one, on loose granular media, the leading spikes tend to sink into the surface, which shifts the pivoting axis inward and effectively shortens the modeled spike length (l). Also, faster tumbles have a higher chance of producing undesirable rebounds upon impact. However, both of these incidental effects can be mitigated by operating in the lower speed range (e.g. 10% higher than [$$\omega \_{\text {min}}$$]). 4 Main Experimental Insights Despite the negative gravity fluctuations, accelerometer data indicates that approximately 40% of parabolas in NASA’s C-9 aircraft yield acceptable conditions for brief microgravity mobility experiments, proving that parabolic flights can be a viable test bed for microgravity robotic systems. Moreover, for other short-duration microgravity experiments that can be executed in quick succession, some parabolas may offer multiple opportunities to collect data. This was not possible with our Hedgehog prototypes, as they require time to accelerate the flywheels. An intuitive way of understanding the hopping uncertainty is by considering the transfer of angular momentum from the flywheel to the Hedgehog, which is assumed to be conserved about the pivoting spike(s) in the control analysis. Indeed, among the hops that did not experience initial drift, the momentum saw a mean loss of only 7%. Importantly, however, this angular momentum can be decomposed into the sum of linear ([$$\varvec{r} \times \varvec{p}$$]) and rotational ([$$\varvec{\varvec{I}} \cdot \varvec{\omega }$$]) components, which can be thought of as the “speed” and “spin” of the Hedgehog. Since we are primarily concerned with the translational trajectory, it is important to understand what portion of the flywheel momentum is converted to linear motion of the mass center. In theory, for the pin-jointed contact model in Fig. 6A, linear and rotational momentum should be in fixed proportions, [$$ml^2\propto I$$], respectively, where m is the mass, l is the spike length, and I is the centroidal inertia. However, there are certain conditions for which these proportions can be distorted. Contact elasticity, for example, can induce a recoil effect as the spikes push against the surface, which reduces the forward spin and increases the speed of the hop. In extreme cases, this may even induce a counter-rotation, and thus, much faster hops. Although elasticity generally yields more efficient hops, it is also less predictable, suggesting that more damping/shock-absorbing spikes may be favorable. Surface slip, on the other hand, has the opposite effect: on low-friction surfaces, the planted spikes tend to slip, or “sweep” under the hopper and incur faster spinning, yet slower hops (which also increases the hop elevation to [$$\theta \_\text {h} \approx \cot ^{-1}\mu $$]). That said, a smooth Coulomb friction model is likely a gross oversimplification for the deformable and irregular surfaces likely to be found on small bodies. For example, the comet regolith simulant described in Sect. 2 is smooth to the touch but often crushed under the pressure of the spikes during a hop, creating a secure foothold that prevents slip. Thus, future work will consider alternative spike designs that include small features to penetrate and grip the surface. In addition to hopping and tumbling, twisting maneuvers were also tested whereby the Hedgehog spins about its vertical axis. This can be leveraged in a controlled way to rotate by some small angle, as discussed in detail in [11]. This was only tested once on sand in 0.03 g’s, which produced a small angular shift as expected. While not directly useful for controlled mobility, aggressive twisting maneuvers (e.g. twists that result in more than one full revolution) could be utilized to energetically escape when embedded in loose regolith. One such maneuver was executed while the Hedgehog was partially embedded in the garnet sand; it ejected all sand within its swept radius and the Hedgehog was launched vertically. The various ways in which surface properties can affect mobility performance raises an interesting question for further research: the inverse problem—that is—given some known control input and corresponding force on the surface, how can information about physical properties of the surface be extracted from the dynamic response. Even constraining bulk properties, such as regolith density, depth, and cohesion, would provide useful information for mission designers and planetary scientists alike. In the broader context of motion planning and navigation on small bodies, the ultimate goal is to reach designated targets, and the controllability of hopping, demonstrated in this paper, is simply one factor that enables this. The dynamics of subsequent bouncing and the physical and topographical properties of the environment also play a critical role. Thus, it is perhaps more important to characterize the uncertainty of a hop than it is to further refine its accuracy with more complex control regimes. The experiments enabled by our method for strategic microgravity testing—and the improved models they inspire—offer unique insight towards achieving this goal. References 1. “Decadal Survey Vision and Voyages for Planetary Science in the Decade 2013–2022,” National Research Council, Technical report (2011) 2. Castillo Rogez, J.C., Pavone, M., Nesnas, I.A.D., Hoffman, J.A.: Expected science return of spatially-extended in-situ exploration at small solar system bodies. In: IEEE Aerospace Conference, Big Sky, MT, pp. 1–15, March 2012 3. “NASA Space Technology Roadmaps, Priorities: Restoring NASA’s Technological Edge and Paving the Way for a New Era in Space”, National Research Council, Technical report (2012) 4. Jones, R.: The MUSES-CN rover and asteroid exploration mission. In: 22nd International Symposium on Space Technology and Science, pp. 2403–2410 (2000) 5. Fiorini, P., Burdick, J.: The development of hopping capabilities for small robots. Auton. robots 14(2), 239–254 (2003)CrossRefMATH 6. Dietze, C., Herrmann, S., Kuß, F., Lange, C., Scharringhausen, M., Witte, L., van Zoest, T., Yano, H.: Landing and mobility concept for the small asteroid lander MASCOT on asteroid 1999 JU3. In: International Astronautical Congress (2010) 7. Sagdeev, R.Z., Zakharov, A.V.: Brief history of the phobos mission. Nature 341(6243), 581–585 (1989)CrossRef 8. “JAXA Hayabusa mission,” JAXA, Technical report (2011). http://​hayabusa.​jaxa.​jp/​e/​index.​html 9. Allen, R., Pavone, M., McQuin, C., Nesnas, I.A.D., Castillo Rogez, J.C., Nguyen, T.-N., Hoffman, J.A.: Internally-actuated rovers for all-access surface mobility: theory and experimentation. In: Proceedings of IEEE Conference on Robotics and Automation, Karlsruhe, Germany, pp. 5481–5488, May 2013 10. Reid, R.G., Roveda, L., Nesnas, I.A.D., Pavone, M.: Contact dynamics of internally-actuated platforms for the exploration of small solar system bodies. In: i-SAIRAS, Montréal, Canada, pp. 1–9, June 2014 11. Hockman, B., Frick, A., Nesnas, I.A.D., Pavone, M.: Design, control, and experimentation of internally-actuated rovers for the exploration of low-gravity planetary bodies. In: Wettergreen, D.S., Barfoot, T.D. (eds.) Field and Service Robotics. Springer Tracts in Advanced Robotics, vol. 113, pp. 283–298. Springer, Heidelberg (2016)CrossRef 12. Chacin, M., Yoshida, K.: A microgravity emulation testbed for asteroid exploration robots. In: Proceedings of i-SAIRAS (2008) 13. Wilcox, B.H.: ATHLETE: a limbed vehicle for solar system exploration. In: 2012 IEEE of Aerospace Conference, pp. 1–9. IEEE (2012) 14. Valle, P., Dungan, L., Cunningham, T., Lieberman, A., Poncia, D.: Active Response Gravity Offload System (2011) 15. Carey, E.M., Peters, G.H., Chu, L., Zhou, Y.M., Cohen, B., Panossian, L., Choukroun, M., Green, J.R., Backes, P., Moreland, S., Shiraishi, L.R.: Development and characteristics of mechanical porous ambient comet simulants as comet surface analogs. In: Lunar and Planetary Science Conference (2016) © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_66 Multi-Sensor SLAM with Online Self-Calibration and Change Detection Fernando Nobre¹  , Christoffer R. Heckman¹   and Gabe T. Sibley¹   (1) Department of Computer Science, University of Colorado, Boulder, Colorado 80309, USA     Fernando Nobre Email: fernando.nobre@colorado.edu   Christoffer R. Heckman (Corresponding author) Email: christoffer.heckman@colorado.edu   Gabe T. Sibley Email: gsibley@colorado.edu Abstract We present a solution for constant-time self-calibration and change detection of multiple sensor intrinsic and extrinsic calibration parameters without any prior knowledge of the initial system state or the need of a calibration target or special initialization sequence. This system is capable of continuously self-calibrating multiple sensors in an online setting, while seamlessly solving the online SLAM problem in real-time. We focus on the camera-IMU extrinsic calibration, essential for accurate long-term vision-aided inertial navigation. An initialization strategy and method for continuously estimating and detecting changes to the maximum likelihood camera-IMU transform are presented. A conditioning approach is used, avoiding problems associated with early linearization. Experimental data is presented to evaluate the proposed system and compare it with artifact-based offline calibration developed by our group. Keywords Self-calibrationSLAMConstant-timeChange detection 1 Introduction Autonomous platforms equipped with visual and inertial sensors have become increasingly ubiquitous. Generally these platforms must undergo sophisticated calibration routines to estimate extrinsic and intrinsic parameters to high degrees of certainty before sensor data may be interpreted and fused. Even once fielded, these platforms may experience changes in these parameters. Self-calibration addresses this by inferring intrinsic and/or extrinsic parameters pertaining to proprioceptive and exteroceptive sensors without using a known calibration mechanism or a specific calibration routine. The motivation behind self-calibration is to remove the explicit, tedious, and sometimes nearly impossible calibration procedure from robotic applications such as localization and mapping. By continuously estimating calibration parameters, no prior knowledge of calibration procedures is required. Furthermore, with the addition of statistical change detection on calibration parameters, long-term autonomy applications are greatly robustified. [] Fig. 1. Example pose graph. Poses being estimated (blue) are conditioned on past poses (red) and landmark positions (stars). Both the fixed sliding window and the adaptive window are conditioned on previous poses. The candidate window is not conditioned since it does not make the assumption that previous poses are correctly estimated. Most current techniques for vision-aided inertial navigation use filtering approaches [1–3] or a smoothing formulation. In either case the estimation is made constant-time by rolling past information into a prior distribution. Filtering methods present the significant drawback of introducing inconsistencies due to linearization errors of past measurements which cannot be corrected post hoc, particularly troublesome for non-linear camera models. Some recent work has tackled these inconsistencies; see, e.g. [4–7]. The state-of-the-art includes methods to estimate poses and landmarks along with calibration parameters, but these approaches do not output the marginals for the calibration parameters, which are desirable for long-term autonomy applications. To address these considerations, we propose a method that avoids using any prior distribution; instead, a conditioning approach is used [8], coupled with selecting only highly informative segments of the trajectory [9]. The method discards segments capturing degenerate motions which provide little to no information for both camera intrinsic and camera-IMU extrinsic [1, 2] parameters. However, unlike the intrinsic parameters of a linear camera model [10], the convergence basin for the six degree of freedom camera-IMU transform is found to be very narrow. An initialization procedure similar to [11, 12] is employed to initialize the camera-IMU transform, which is then used in a maximum-likelihood estimator. The use of a maximum-likelihood formulation is especially useful as it provides the covariance matrix for the estimated parameters, which makes it possible to establish a fitness score for each segment of the trajectory. We also propose an extension to the framework presented in [9], allowing for multiple sensors to be self-calibrated in an online setting, leveraging [1, 2] to disambiguate unobservable degrees of freedom. Note that while the global position of the IMU and the rotation axis about gravity are not observable, the following quantities are generally observable: (1) IMU roll and pitch with respect to the horizontal plane; (2) IMU position, orientation and velocity with respect to the initial IMU position; (3) feature position with respect to the initial IMU position; and (4) IMU-to-camera transformation. Finally, we introduce per-sensor candidate trajectory segments, which we find to be necessary to properly estimate each sensors’ relevant parameters online. 2 Formulation and Methodology A keyframe-based [13] pose-and-landmark non-linear maximum likelihood estimation is performed for real-time map updates. The calibration parameters, including time-varying IMU biases, are also estimated alongside the pose and landmark parameters on the informative segments of the trajectory. The complete state vector is: [$$\begin{aligned} \varvec{X}= \left[ \begin{array}{ccc} \left\{ \begin{array}{cccc} \varvec{x}\_{wp\_n}&\varvec{v}\_{w\_n}&\varvec{b}\_{g\_n}&\varvec{b}\_{a\_n}\end{array}\right\}&\left\{ \rho \_k\right\}&\left\{ \varvec{x}\_{c}\right\} \end{array}\right] ^{T}, \end{aligned}$$] (1) where [$$\varvec{x}\_{wp\_n} \in $$] SE(3) is the transformation from the coordinates of the [$$n^\text {th}$$] keyframe to world coordinates, [$$\varvec{v}\_{w\_n} \in \mathbb {R}^{3}$$] is the velocity vector of the [$$n^\text {th}$$] keyframe in world coordinates, and [$$\varvec{b}\_{g\_n}\in \mathbb {R}^{3}$$] and [$$\varvec{b}\_{a\_n}\in \mathbb {R}^{3}$$] are the gyroscope and accelerometer bias parameters for the [$$n^\text {th}$$] keyframe respectively. [$$\left\{ \rho \_{k}\right\} $$] is the 1-D inverse-depth [14] parameter for the [$$k^{\text {th}}$$] landmark and [$$\left\{ \varvec{x}\_{c}\right\} $$] are the calibration parameters. Note that [$$\varvec{x}\_{wp\_n}$$] has 6 degrees of freedom: 3 for translation, and 3 for rotation. To avoid singularities arising from a minimal representation (e.g. using Euler angles), the rotation component of the transformation is represented as a quaternion, with the optimization lifted to the tangent space (at the identity) of the SO(3) manifold. Measurements are formed by tracking image keypoints across frames. A landmark parameterized by inverse depth is projected onto an image forming a projected pixel coordinate [$$\varvec{p}\_{proj}$$] which is formulated via a transfer function [$$\varvec{T}$$] as follows: [$$\begin{aligned} \nonumber \varvec{p}\_{proj}&= \textit{T}\left( \varvec{p}\_{r}, \varvec{T}\_{wp\_m}, \varvec{T}\_{wp\_r} \varvec{T}\_{pc},\rho \right) \\&=\mathscr {P}\left( \varvec{T}\_{pc}^{-1}\varvec{T}\_{wp\_{m}}^{-1}\varvec{T}\_{wp\_{r}}\varvec{T}\_{pc}\mathscr {P}^{-1}\left( \varvec{p}\_{r},\rho \right) \right) . \end{aligned}$$] (2) where [$$\rho $$] is the inverse depth of the landmark, [$$\varvec{T}\_{wp\_{r}}$$] is the transformation from the coordinates of the reference keyframe (in which the landmark was first seen and initialized) to world coordinates, [$$\varvec{T}\_{wp\_{m}}$$] is the transformation from the measurement keyframe to world coordinates, [$$\varvec{p}\_{r}$$] is the 2D image location where the original feature was initialized in the reference keyframe, [$$\varvec{p}\_{m}$$] is the measured 2D image location in the measurement keyframe, [$$\varvec{T}\_{pc}$$] is the transformation from the camera to the keyframe coordinates, [$$\mathscr {P}^{-1}$$] is the 2D to 3D back-projection function and [$$\mathscr {P}$$] is the 3D to 2D camera projection function which returns the predicted 2D image coordinates. The camera-to-keyframe transformation [$$\varvec{T}\_{pc}$$] is non-identity as the keyframe is collocated on the inertial frame (the frame in which inertial measurements are made), to simplify the inertial integration. [$$\varvec{T}\_{pc}$$] is the calibration parameter we have interest in estimating. The usual approach is to assume Gaussian noise and minimize a nonlinear least squares problem with the following residual function: [$$\begin{aligned} r\_{\mathscr {V}\_{m,k}} =&\Vert \varvec{e}\_{\mathscr {V}\_{m,k}}\Vert \_{\varSigma \_{\varvec{p}\_{m,k}}}^{2} =\Vert \varvec{p}\_{m,k}- \varvec{p}\_{proj}\Vert \_{\varSigma \_{\varvec{p}\_{m,k}}}^{2}. \end{aligned}$$] (3) where [$$\varvec{p}\_{m,k}$$] is the measured 2D image location of the [$$k^\text {th}$$] landmark in the [$$m^\text {th}$$] keyframe with covariance [$$\varSigma \_{\varvec{p}\_{m,k}}$$]. 2.1 Initialization As shown in [11, 12], having a good initial estimate can mean the difference between fast convergence and complete divergence. As such, we leverage the work from [1, 2, 11] which shows that with a minimum of three frames and five tracked features, it is possible to obtain the camera-to-IMU rotation. This initial rotation estimate can then be used to solve a linear system for an initial guess at the translation estimate. We consider the scenario where enough (five or more) features are observed across at least three frames. The tracked features can be used to obtain the relative rotation between two camera frames i, j: [$$^{C}R\_{ij}$$] and integrating the IMU measurements to obtain the relative rotation: [$$^{B}R\_{ij}$$], where C represents the camera frame and B the body frame, which is defined without loss of generality as the IMU frame. The following equation relates the camera rotation to the body rotation: [$$\begin{aligned} ^{C}R\_{ij} = {^{C}\_{B}R} {^{B}R\_{ij}} {^{B}\_{C}R} \Rightarrow ^{C}R\_{ij}{^{C}\_{B}R} = {^{C}\_{B}R}{^{B}R\_{ij}}, \end{aligned}$$] (4) where [$${^{C}\_{B}R}$$] is the rotation of the body frame in the camera frame. In order to obtain [$${^{C}\_{B}R}$$] we employ an error-state formulation to minimize a robustified over-constrained least squares problem. In our experience we find that collecting more than 3 frames yielded more reliable estimates; therefore, we use 20 frames for the initial rotation estimate. Once the estimate on [$${^{C}\_{B}R}$$] has converged, translation can be obtained by employing the method described in [11] by solving a linear system derived from transferring the 3D position of a landmark from the camera to the body frame. 2.2 Constant Time Self-Calibration The constant time self-calibrating framework is briefly summarized here; for more details, refer to [10]. Due to the limited observability and high connectivity of calibration parameters in the SLAM graph, it is impractical to estimate these parameters in real-time applications using conventional filtering or smoothing approaches [3, 7, 15, 16]. Instead every segment of m frames in the trajectory is analyzed, and the n most informative segments are added to a priority queue, where m and n are tuning parameters dependent on the calibration parameters being estimated. In order to assess the informativeness of a segment, a score is computed based on the marginals of the calibration parameters estimated by a particular candidate segment. If the candidate segment outperforms the worst-scoring window in the priority queue by a predefined margin, it is swapped in. Every time the priority queue is updated, a batch optimization over poses, landmarks and calibration parameters is run on all the segments in the queue to obtain a new set of calibration parameters. As such, the priority queue represents a rolling estimate of the n most informative segments in the trajectory. For estimating camera intrinsic parameters, such as focal length and principal point, only visual measurements are used in the candidate segment. When the camera-to-IMU transform is estimated, inertial residuals are added to the candidate window estimation. The priority queue optimization’s null space therefore requires careful treatment as it is carried out over several non-continuous segments of the trajectory with varying sensor data. Figure 1 shows the optimization windows over a sample set of poses. Figure 2 shows the proposed architecture for multiple sensors. [] Fig. 2. System architecture with two sensors. For new sensors to be added only the blue boxes need to be provided. Asynchronous Adaptive Conditioning and the Priority Queue boxes each run in their own thread (dotted regions). The main thread is only tasked with the maximum-likelihood estimator and analyzing candidate segments. 2.3 Change Detection The priority queue posterior (with covariance [$$\varSigma ^\prime \_{PQ}$$]) represents the uncertainty over the calibration parameters considering the top k segments in the trajectory. As these segments are usually not temporally consecutive, this distribution encodes the long term belief over the calibration parameters. Conversely, the candidate segment posterior (with covariance [$$\varSigma \_{s}$$]) is calculated based on the most recent measurements and represents an instantaneous belief over the calibration parameters. If there is a sudden change in calibration parameters, for example if the camera is rotated or moved to a different location on the platform, then this will manifest as a difference in the means of the two posterior distributions. This task of comparing the means of two multivariate normal distributions with different covariances is known as the Multivariate Behrens-Fisher problem. When the posterior of the priority queue and the candidate segment is over a set of calibration parameters that represent an SE(3) pose, special attention has to be given to comparing the means of these distributions, particularly with regards to the rotation. A minimal local parameterization is used for the rotation component of the 6 DOF SE(3) pose, so when comparing two posteriors over rotations in the [$$\mathfrak {so}(3)$$] tangent space, one posterior must be transported to the tangent space of the other by means of the Adjoint map, which for SO(3) is: [$$\begin{aligned} {Ad}\_{R} : \mathbb {R}^3 \rightarrow \mathbb {R}^3, \quad Ad\_{R} = R, \end{aligned}$$] (5) which allows moving the matrix exponential from the right-hand side to the left-hand side: [$$\begin{aligned} A\cdot exp(\widehat{x}\,) = exp(\widehat{Ad\_{A} \cdot x})\cdot A, \end{aligned}$$] (6) where if [$$q \in \mathfrak {so}(3)$$] is in minimal 3-vector tangent representation, and [$$M^-\_{3 \times 3}$$] is the space of [$$(3 \times 3)$$] skew-symmetric matrices, then the map [$$\widehat{(\cdot )} : q \rightarrow M^-\_{3 \times 3}$$]. By transporting the tangent space rotation posterior from the candidate segment to the tangent space of the priority queue posterior, the null hypothesis that the means are equal can be tested: [$$\begin{aligned} H\_0 = \mu \_{PQ} = \mu {s} \end{aligned}$$] (7) The F distribution for the null hypothesis is as in [9]. 2.4 Adaptive Asynchronous Conditioning An adaptive asynchronous conditioning [8] solution is employed to avoid the use of a prior distribution on the sliding window SLAM. When conditioning is used instead of marginalization, current active parameters are conditioned on previous parameters, which are assumed to be correct. However since new information may alter the estimate for previous poses, a sliding window pose and landmark estimation is run on a separate thread. This sliding window can adaptively increase its size to alter previous poses based on new measurements. The criteria to increase the window is based on the “tension” of the conditioning residuals, explained as follows. Conditioning residuals are the residual terms connecting an active and inactive pose. For example, a landmark that has a reference frame in an inactive pose, but is seen in an active pose will have a conditioning visual residual. The window is expanded when the current estimate for a parameter falls outside of the expected estimate based on the conditioning residual. Since multiple sensor modalities are used, the Mahalanobis distance of each conditioning residual is thresholded in a [$$\chi ^2$$] test to probabilistically determine when a residual is outside of its expected interval (inducing “tension” in that residual). 3 Experimental Results In order to evaluate the proposed method, experiments were run on two sensor platforms known as “rigs.” Both rigs were equipped with a monocular camera and a commercial grade MEMS-based IMU. Rig A is a smartphone-like mobile device with an integrated global shutter camera with a wide field-of-view lens at [$$640\times 480$$] resolution and a commercial MEMS IMU sampled at 120 Hz. Rig B is a Ximea MQ022CG-CM camera with a wide field-of-view lens at [$$2040 \times 1080$$] resolution downsampled to [$$640 \times 480$$] coupled with a LORD MicroStrain 3DM-GX3 MEMS IMU, sampled at 200 Hz. Cameras on both rigs capture images at 30 frames per second. In all experiments, the AAC system is comprised of a fixed-window estimator with a 10 keyframe window width and an asynchronous adaptive estimator (as per Sect. 2.4) with a minimum window size of 20 keyframes. As broached in Sect. 2.1, when both the camera intrinsic parameters and the camera-to-IMU transform are unknown, an initial batch optimization comprising all poses, landmarks and calibration parameters (but no IMU measurements) runs until its entropy falls below a predetermined threshold, at which point the camera intrinsic calibration is handed over to the self-calibration framework discussed in Sect. 2.2. At this point the IMU initialization procedure is engaged—first separately estimating rotation and translation by solving a linear system, then handing over initial estimates on the camera-to-IMU transform to a batch estimation for refinement. Once the batch camera-to-IMU estimation has fallen below a predetermined entropy, the estimation is passed on to the rolling self-calibrating framework for constant-time estimation. [] Fig. 3. Results of a reconstructed indoor dataset spanning 1200 keyframes and 2972 frames. The priority queue consisted of 5 segments with 30 poses in each segment. Camera-to-IMU translation and rotation estimates (solid blue line), with their 3 sigma bounds (dotted red line). The pseudo ground truth (solid black line), obtained by offline calibration procedures is shown to be close to the online estimates, with average sub-degree rotation error and centimeter-level translation error. The first experiment was run on Rig A, with unknown camera and camera-to-IMU calibration parameters. The camera calibration was initialized to common values: focal lengths [$$f\_x$$] and [$$f\_y$$] were set to [$$90^\circ $$] and the central points [$$c\_x$$] and [$$c\_y$$] were set to half the image width and height, respectively. The initial camera-to-IMU transform is set to identity, i.e. that the sensors are co-located. Figure 3 shows the results of camera-to-IMU estimation of the system running on a sample data-set, in which it can be seen that the priority queue is successfully tracking the offline estimates [17]. Of note are the limited observability of the rotation about the axis of gravity (yaw) and the relatively constant uncertainty. This is due to the fixed number of segments in the priority queue, which can be expanded to include more segments and approximate the batch estimate, at the cost of computational processing. Figure 4 shows the camera intrinsics on the same dataset. A second experiment was performed on Rig B, where only the camera-to-IMU parameters were being estimated, but the position of the IMU was physically changed mid-dataset. This experiment’s results are show in Fig. 5. [] Fig. 4. Self-calibration camera intrinsic parameters. Neither camera intrinsic or camera-to-IMU extrinsic parameters were known. Even with total uncertainty on all calibration parameters at the start, convergence to offline values is observed for both camera intrinsic and extrinsic parameters. [] Fig. 5. Indoor dataset on Rig B, the IMU position was manually changed mid-dataset. Only the y component of translation was changed, all other parameters remained the same. as shown by the pseudo ground truth line (black line). The system automatically detected a change in mean and re-estimated all parameters. 4 Discussion In Fig. 4, a sharp drop is witnessed in uncertainty on all intrinsic parameters around keyframe 820, where a particularly informative segment was swapped into the queue. The same behavior is not witnessed around keyframe 820 for the camera-to-IMU transform estimate in Fig. 3, which strongly suggests the need for different queues for different sensors. Supporting the initialization sequence used for SE(3) transform approximation, Fig. 5 demonstrates rapid convergence to new translation parameters when the sensors are moved with respect to one another on Rig B. The entropy of the priority queue increases temporarily until enough post-change segments are added. Some discrepancies between the offline values and the estimates from the priority queue can be observed (such as on the rotation values in Fig. 3). This can be caused by a number of factors: (1) the offline calibration is only a pseudo-ground truth, and (2) lack of observability of these parameters, especially yaw, since we only use naturally occurring features. Note that the self-calibration sequence we suggest relies on non-degenerate motions that excite the appropriate degrees of freedom so as to render them observable, which we have found to occur naturally in experimental hand-held datasets. A particular failure case is through slow changes of calibration parameters through a data collection. Changes in parameter values are currently induced as a step function; however, if a calibration parameter changes incrementally over time, it will not trigger a change event, as per Sect. 2.3. Instead, new segments with low entropy will be swapped into the priority queue, mixed with past segments that presented a different mean. Another failure case is related to the determinant-based scoring system, which could result in a very low uncertainty for an unobservable parameter. These drawbacks warrant further development of a more robust scoring system. 5 Conclusions and Future Work This paper presents online, constant-time self-calibration and change detection with re-calibration for joint estimation of camera-to-IMU transform and camera intrinsic parameters, using only naturally occurring features. The system is evaluated with experimental data and shown to converge to offline calibration estimates with centimeter level accuracy for camera-to-IMU translation, and sub-degree accuracy for rotation. The statistical change detection framework presented in [9] and summarized in Sect. 2.3 has been extended to the camera-to-IMU transform, including a statistical comparison of distributions over candidate segments for a SE(3) pose. The use of an adaptive conditioning window for re-estimation of past poses allows this framework to operate in long-term applications where the accumulation of linearization errors in a prior distribution would lead to significant drift. We presented a framework that supports adding additional sensors while maintaining online operation. To the authors’ best knowledge this is the first application of multi-sensor self-calibration with automatic change detection and re-estimation of parameters. Acknowledgments This work is generously supported by Toyota Motor Corporation. References 1. Jones, E.S., Soatto, S.: Visual-inertial navigation, mapping and localization: a scalable real-time causal approach. Int. J. Robot. Res. 30(4), 407–430 (2011)CrossRef 2. Kelly, J., Sukhatme, G.S.: Visual-inertial sensor fusion: localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 30(1), 56–79 (2011)CrossRef 3. Mourikis, A.I., Roumeliotis, S.I.: A multi-state constraint kalman filter for vision-aided inertial navigation. In: IEEE International Conference on Robotics and Automation, pp. 3565–3572 (2007) 4. Li, M., Mourikis, A.I.: High-precision, consistent EKF-based visual–inertial odometry. Int. J. Robot. Res. 32(6), 690–711 (2013)CrossRef 5. Hesch, J.A., Kottas, D.G., Bowman, S.L., Roumeliotis, S.I.: Towards Consistent Vision-Aided Inertial Navigation. In: Frazzoli, E., Lozano-Perez, T., Roy, N., Rus, D. (eds.) Algorithmic Foundations of Robotics. Springer Tracts in Advanced Robotics, vol. 86, pp. 559–574. Springer, Heidelberg (2013)CrossRef 6. Civera, J., Bueno, D.R., Davison, A.J., Montiel, J.M.M.: Camera self-calibration for sequential Bayesian structure from motion. In: International Conference on Robotics and Automation, pp. 403–408. IEEE (2009) 7. Li, M., Yu, H., Zheng, X., Mourikis, A.I.: High-fidelity sensor modeling and self-calibration in vision-aided inertial navigation. In: International Conference on Robotics and Automation, pp. 409–416. IEEE (2014) 8. Keivan, N., Sibley, G.: Asynchronous adaptive conditioning for visual-inertial SLAM. Int. J. Robot. Res. 34(13), 1573–1589 (2015)CrossRef 9. Keivan, N., Sibley, G.: Online SLAM with any-time self-calibration and automatic change detection. In: International Conference on Robotics and Automation, pp. 5775–5782. IEEE (2015) 10. Keivan, N., Sibley, G.: Constant-time monocular self-calibration. In: Robotics and Biomimetics (ROBIO), pp. 1590–1595. IEEE (2014) 11. Dong-Si, T.C., Mourikis, A.I.: Estimator initialization in vision-aided inertial navigation with unknown camera-IMU calibration. In: Intelligent Robots and Systems, pp. 1064–1071. IEEE (2012) 12. Carlone, L., Tron, R., Daniilidis, K., Dellaert, F.: Initialization techniques for 3D SLAM: a survey on rotation estimation and its use in pose graph optimization. In: International Conference on Robotics and Automation, pp. 4597–4604. IEEE (2015) 13. Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: International Symposium on Mixed and Augmented Reality, pp. 225–234. IEEE (2007) 14. Civera, J., Davison, A.J., Montiel, J.M.M.: Inverse depth parametrization for monocular SLAM. Trans. Robot. 24(5), 932–945 (2008)CrossRef 15. Li, M., Mourikis, A.I.: 3-D motion estimation and online temporal calibration for camera-IMU systems. In: International Conference on Robotics and Automation, pp. 5709–5716. IEEE (2013) 16. Leutenegger, S., Lynen, S., Bosse, M., Siegwart, R., Furgale, P.: Keyframe-based visual–inertial odometry using nonlinear optimization. Int. J. Robot. Res. 34(3), 314–334 (2015)CrossRef 17. Autonomous Robotics, Perception Group (ARPG): VICalib visual-inertial calibration suite (2016). https://​github.​com/​arpg/​vicalib © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_67 Experimental Comparison of Open Source Vision-Based State Estimation Algorithms Alberto Quattrini Li¹  , A. Coskun¹  , S. M. Doherty¹  , S. Ghasemlou¹  , A. S. Jagtap¹  , M. Modasshir¹  , S. Rahman¹  , A. Singh¹  , M. Xanthidis¹  , J. M. O’Kane¹   and I. Rekleitis¹   (1) Computer Science and Engineering Department, University of South Carolina, 315 Main, Columbia, South Carolina 29208, USA     Alberto Quattrini Li (Corresponding author) Email: albertoq@cse.sc.edu   A. Coskun Email: acoskun@email.sc.edu   S. M. Doherty Email: dohertsm@email.sc.edu   S. Ghasemlou Email: sherving@email.sc.edu   A. S. Jagtap Email: ajagtap@email.sc.edu   M. Modasshir Email: modasshm@email.sc.edu   S. Rahman Email: srahman@email.sc.edu   A. Singh Email: akanksha@email.sc.edu   M. Xanthidis Email: mariosx@email.sc.edu   J. M. O’Kane Email: jokane@cse.sc.edu   I. Rekleitis Email: yiannisr@cse.sc.edu Abstract The problem of state estimation using primarily visual data has received a lot of attention in the last decade. Several open source packages have appeared addressing the problem, each supported by impressive demonstrations. Applying any of these packages on a new dataset however, has been proven extremely challenging. Suboptimal performance, loss of localization, and challenges in customization have not produced a clear winner. Several other research groups have presented superb performance without releasing the code, sometimes materializing as commercial products. In this paper, ten of the most promising open source packages are evaluated, by cross validating them on the datasets provided for each package and by testing them on eight different datasets collected over the years in our laboratory. Indoor and outdoor, terrestrial and flying vehicles, in addition to underwater robots, cameras, and buoys were used to collect data. An analysis on the motions required for the different approaches and an evaluation of their performance is presented. Keywords Vision based state estimationLocalizationSLAM 1 Introduction One of the most significant challenges in robot autonomy is state estimation, specifically the dual problems of tracking the pose of the robot as it moves through its environment and of mapping that environment as the robot moves. In the last decade, the wide availability of camera sensors, coupled with progress in computer vision, has given rise to a variety of vision-based techniques for these problems, known as visual odometry or visual SLAM. Scaramuzza and Fraundorfer [11, 28] presented a comprehensive overview this work, from the fundamentals of Visual Odometry to recent research challenges and applications. Fuentes-Pacheco et al. [12] recently surveyed Visual SLAM methods. Vision based state estimation can be divided into a few broad appoaches. One line of research uses probabilistic filters, such as the Extended Kalman Filter (EKF), to fuse visual features with other data. For example, some influential works that fuse data from a camera and an inertial measurement unit (IMU) include those of Mourikis and Roumeliotis [25], Jones and Soatto [17], and Kelly and Sukhatme [18]. Another group of approaches builds on Structure from Motion (SfM) methods and Visual Odometry (VO), in which images are processed to extract features to be tracked, and the poses are estimated by minimizing the re-projection error deriving from the reconstruction of such tracked features. Such approaches include the work of Davison et al. [7] on real time accurate 3D structure reconstruction and motion estimation of a monocular camera, moving in a constrained indoor space. Konolige et al. [20], Furgale and Barfoot [13] have shown real-time visual odometry systems that are capable of accurately localizing terrestrial robots over tens-of-kilometers-long trajectories. Computationally expensive global optimization schemes, often termed bundle adjustment (BA) [24, 31], can also be used. BA can be further subdivided by whether features (sparse methods) or pixel intensities (direct methods) are considered for tracking. In the recent years, several open source software packages for visual state estimation have become available, each supported by impressive demonstrations. However, the comparative evaluation of such methods, when available, is usually limited to only a few of them at a time, e.g., [32], making it difficult to select a reliable and robust method. Also, due to both algorithmic limitations, such as, number of and sensitivity to parameters, special initialization motions, etc., and software engineering challenges, such as, diverse input formats, undisclosed software dependencies, etc., applying these packages on new datasets can be remarkably difficult. In addition, several other research groups have presented superb performance without releasing the code, sometimes materializing as commercial products—e.g., [16], thus making it hard to evaluate and use. The objective of this paper is to bring clarity to the landscape of visual state estimation software. Specifically, we evaluate eleven open source packages on eight new datasets. The datasets span a variety of environments (including indoor, outdoor, and underwater) and vehicle types (including terrestrial, airborne, marine surface, and underwater platforms). We present an analysis on the motions required for each approach, together with an evaluation of their performance on each dataset. The main contribution of this paper is to provide, based on this analysis, insights on which package to choose according to the problem at hand, and to highlight some of the open challenges that are still not fully addressed. Good practices for producing replicable results are also discussed. This paper is structured as follows. The next section briefly describes the tested algorithms. Section 3 shows the datasets used in the evaluation. Section 4 presents the results and Sect. 5 discusses them, concluding the paper. 2 Methods Evaluated Tables 1 and 2 list the open source vision-based state estimation packages analyzed in this paper together with a qualitative evaluation on different datasets. This section briefly introduces each of those methods, without any attempt to provide a comprehensive discussion of their details; please refer to the original papers for more information. Kalman filter-based methods: MonoSLAM [5] is based on an incremental EKF, where the state contains the map and the camera pose. The state vector is updated with the prediction step, assuming a constant motion model that follows a Gaussian profile. The update is performed according to measurements derived from the detected features in the images. Feature points are detected with an active search algorithm that restricts the search space to the most probable area, according to a window and an estimated motion. SfM-based methods: Packages that based on the Structure from Motion (SfM) approach include libVISO [15], which is a library that provides a sparse visual odometry method. Parallel Tracking and Matching (PTAM) [19], which is also a sparse method, is designed for augmented reality applications in small work spaces. It works with input images from a monocular camera. PTAM performs state estimation in two steps. First, a tracking phase, in which new frames are compared with the current map using features. Second, a map updating phase, which utilizes a set of keyframes. An initialization phase where the same features are seen from different point of views is required. ORB-SLAM [26] is a monocular SLAM system, with a recent extension to stereo visual input, that uses ORB features for tracking, mapping, relocalizing, and loop closing. Semi-direct Visual Odometry (SVO) [10] extracts features only when a new keyframe is added to the map and matches the features in the successive frames as an implicit result of direct motion estimation. Outliers are filtered out with a Bayesian filter. Large-Scale Direct Monocular SLAM (LSD-SLAM), instead of using key- points, operates on intensities of images from a monocular camera [9], both for tracking and mapping, allowing a dense 3D reconstruction. Finally, RatSLAM [2] takes inspiration from the neural processes in rodent brains for navigation. Given images from a monocular camera and odometric information, the method matches scenes according to their appearance and constructs a semi-metric topological map. Global optimization methods: Some of the above real-time solutions utilize global optimization packages to smooth the resulting trajectories. The open source packages: g [$$^2$$] o [21] and Ceres [1] are both graph optimization frameworks working with nonlinear error functions. They can model and efficiently solve large optimization problems. A very recent solution that involves a complete visual pipeline is COLMAP [29] that allows a reconstruction of ordered or un-ordered sets of image. It utilizes the Ceres [1] framework over the whole set of images, resulting in impressive, albeit very slow, reconstructions of the camera trajectory and the environment. 3 Experimental Datasets Although standard datasets are important for reproducibility and repeatability in experimental evaluation, existing datasets for state estimation typically capture only a single scenario, such as a university campus (e.g. Rawseeds [4]) or an urban environment (e.g. Kitti [14]). [] Fig. 1. Characteristic images from the evaluated datasets. Top row from left: UGV outdoors, UGV indoors, UAV outdoors, UAV indoors; Bottom row from left: AUV over a coral reef, AUV inside a wreck, Drifter, Camera moved manually underwater. To test the visual state estimation packages discussed above on a richer set of scenarios, we collected datasets in the form of ROS bag files¹ in different environments using a diverse set of robotic platforms: - UGV outdoor (H/Out) and indoor (H/In): A Clearpath Husky unmanned ground vehicle (UGV), equipped with GPS, IMU, and monocular camera (30fps, [$$640 \times 480$$]), moving both outside and inside a building at the University of South Carolina campus. The camera was mounted forward facing and lateral facing in different experiments. - UAV outdoor (Q/Out) and indoor (Q/In): A Parrot AR-drone 2.0 quadrotor, with front (30fps, [$$640 \times 360$$]) and bottom cameras (60fps, [$$320 \times 240$$]) and an IMU, in the same environment as above. The forward facing camera was used for the evaluation. During the indoor experiments, the UAV experienced several abrupt rotations which resulted in loss of localization in most of the packages. - AUV over coral reefs and inside a shipwreck: An Aqua2 autonomous underwater vehicle (AUV), equipped with an IMU and a forward facing camera (15fps, [$$870 \times 520$$]), operating off the coast of Barbados. - Drifter: A custom made passive drifter [3] equipped with GPS, IMU, and a 10fps [$$640 \times 480$$] camera, deployed also off the coast of Barbados. The camera is downward facing and the motion of the asset was caused only by the wave action. The bobbing motion of the camera resulted in an expanded field of view up to 120[$$^\circ $$]. The low quality of the camera, the constantly changing lighting condition, and the continuous rotations made this dataset the most challenging among all. - Manual underwater: A pair of GoPro Hero3+ cameras (30fps, [$$1920 \times 1080$$]) in a 3D Dual Hero System stereo configuration, deployed off the coast of Barbados. The stereo rig was operated by a diver inspecting inside and around shipwrecks and coral reefs. Table 1. Qualitative Analysis: Performance of the different open source packages using the provided datasets from every other package. The legend is as follows: red–failure, i.e., the algorithm does not localize the robot with the tested parameters; orange–partial failure, i.e., the algorithm is able to track the robot in portions of the trajectory; yellow–partial success, i.e., the algorithm is able to track the robot until the end, but the trajectory contains some errors; green–success, i.e., the method produces an accurate trajectory. [] The datasets together with detailed instructions on the usage of each package can be found online at http://​afrl.​cse.​sc.​edu/​afrl/​resources/​datasets/​ so future packages could be tested and evaluated. 4 Results The software packages described above were evaluated using the provided datasets from each package (cross-validation) and also the eight datasets discussed above. The tests were performed on a computer equipped with an Intel i7-4770 3.4 GHz CPU, 16 GB RAM, under Ubuntu 14.04 and ROS Indigo Igloo. The cameras were calibrated and the intrinsic parameters were provided to each package. In addition, the specific parameters of all packages were manually tuned for each dataset. The parameters were initially set to the package’s default values, and tuned to improve the performance. All available suggestions from the packages’ authors for parameter selection were followed. To test the global optimization frameworks, as they do not provide a complete SLAM system, input graphs were obtained by saving the pre-optimized resulting graph at the end of the best run of ORB-SLAM, which already relies on g[$$^2$$]o for local optimization. Repeated trials were conducted for each package-dataset pair; we report the best observed result for each pair from all the trials. Table 2. Qualitative analysis: performance of the different open source packages using the new datasets. Datasets: Husky outdoors (H/Out); Husky indoor (H/In); Quadrotor outdoor (Q/Out); Quadrotor indoor (Q/In); Aqua on coral reef (A/Out); Aqua inside wreck (A/In); drifters on coral reef (D/UW); GoPro stereo on the outside of a shipwreck (G/UW). The legend is as in Table 1. [] Table 1 shows a qualitative summary of the cross-validation experiments which test each package against the datasets provided by every other package. The cell colors indicate performance, utilizing the best parameters arising after extensive tuning. Green illustrates that the results were accurate. Yellow means that the robot was localized for the whole experiment, but the resulting trajectory deviated significantly from the general structure of the observed behavior. Orange shows that the method tracked the robot pose in some portions of the trajectory. Red indicates that the package was not able to localize the robot. The majority of the datasets provided have short duration, usually covering a small workspace inside a lab. Table 2 presents a qualitative summary of the results from the eight diverse datasets collected by the authors. The same colors were used as in the previous table. In several occasions packages exhibited different performance in repeated trials under identical conditions. In all cases the best performance was used. In addition, for PTAM the datasets were tested to provide a starting point which resulted in initializing the tracking and the package was evaluated using the hand-tuned (trimmed) trajectory. Figure 2 shows examples of trajectories from the H/Out and H/In datasets, for each package rated Yellow or Green on that dataset. [] Fig. 2. Trajectories resulting from the tested methods in H/Out and H/In, together with the GPS trace (outdoor) and gmapping (laser-based) trajectory (indoor). Finally, Table 3 shows quantitative results evaluating the produced trajectory of each package for select datasets where a good estimate of the trajectory is available from other sources (GPS or LIDAR sensor). That trajectory is used as ground truth. In particular, for H/Out the GPS information is available, while for H/In the ground truth trajectory was obtained using gmapping² on the odometric, inertial, and LIDAR data. The metrics considered are: - Er the accuracy, measured in terms of error between ground truth and the produced trajectory [22]. In particular, the metric is based on the relative displacement between robot poses. More formally, the error of a trajectory [$$x\_{1:T}$$] with respect to the ground truth trajectory [$$x\_{1:T}^\*$$] is calculated as: [$$\epsilon (\delta ) = \frac{1}{N} \sum \_{i,j} \textit{trans}(\delta \_{i,j}\ominus \delta \_{i,j}^\*)^2 $$] where [$$\delta \_{i,j}$$] and [$$\delta \_{i,j}^\*$$] are the relative relation between two consecutive poses at time i, j for the estimated trajectory and ground truth trajectory, respectively, N is the number of relative relations, and trans considers the translation component. The error is reported in meters - TL track loss percentage, that is the ratio between the time in which the system is not localized and the total time of the dataset; lower numbers are better. - Mem the maximum amount of memory used by the package during a run; reported in megabytes (MB). Note that as a monocular setup is considered in almost all the packages, a post-processing step is performed on the produced trajectory to fit/align to the ground truth minimizing the distance between them. In particular, the vision-based trajectory is rotated and scaled in order to coincide with the ground truth trajectory, at least at the starting moments. Some of the packages failed finding a trajectory, thus the resulting error displays a very large value. In H/Out[$$\_1$$], the robot traveled outdoor in the grass with bushes and trees, while in H/Out[$$\_2$$] the robot was moving on the sidewalk. Images in H/In were collected inside the Computer Science and Engineering department. Table 3. Quantitative Evaluation of the different open source packages for the selected datasets with ground truth. Er measures the accuracy of the trajectory and is reported for packages which were partially successful, TL is the percentage of track loss, and Mem is the maximum memory usage. N/A stands for not applicable, e.g., calibration parameters were not reported for a dataset. [TABLE] [$$^1$$] The error reported is only for a large part of the trajectory ORB-SLAM is the package that provides the best result in terms of accuracy among the sparse methods, and using g[$$^2$$]o at the very end of the dataset does not improve much the trajectory, highlighting its reliability. MonoSLAM is not able to localize the robot for most of the trajectory. Packages perform better in structured datasets (H/Out[$$\_2$$] and H/In) than unstructured ones, because features can be more easily identified. Memory usage does not show any specific pattern considering the different classes of visual SLAM methods, although for most of them it grows linearly over time. The difference between online and offline approaches is illustrated in Fig. 3 which shows the results from ORB-SLAM and COLMAP for one dataset collected outdoors using a Husky UGV. The global optimization method provides visually better results compared to the realtime one; however, it took more than a day for COLMAP to find the presented solution. [] Fig. 3. Resulting trajectories/reconstruction from (a) ORB-SLAM; (b) COLMAP. 5 Main Experimental Insights Comparing the behavior and performance of such a diverse set of vision based estimation packages provided multiple insights. One of the main challenges is to find the fine balance between computational efficiency and result accuracy. Many parameters, such as the number of tracked features and the number of RANSAC iterations, can improve the accuracy at the expense of added computational load. A slight change in some of the parameters could lead to very different behaviors. Some of the packages, such as SVO, restricted the operating space to a small area during their demonstrations. This allows the method to produce very good trajectories in a limited workspace, as it is possible to run a global optimization algorithm on all of the keyframes in the map. As a result, SVO was only able to track the trajectory in the tested datasets partially. The cross validation and the new datasets results show that in testing a proposed approach more challenging scenarios should be considered to validate it. Indeed, most of the attached datasets with the packages are from experiments performed inside a single laboratory, many times just over a single desk. The images’ quality is another important factor influencing the results. The quality depends on the amount of texture in the images, illumination variations, and the presence of blur, both out of focus and motion blur. As most packages tested rely on tracking features, the quality of the detected features depends on the image quality. For example, sharp rotations are a type of motion that authors of some packages, such as ORB-SLAM, suggested to avoid as it could result in losing track of the detected features. As a matter of fact, the most successful package, ORB-SLAM, failed for Q/Out, which contains continuous rotations. Moreover, many packages failed in the underwater datasets, due to the difficult visual conditions, which led to features not detected and also to several wrong loop closures. This is especially true for the dataset from the drifting sensors, in which the camera has the lowest frame-rate compared to the other datasets. Note that, since monocular cameras cannot recover depth from a single frame, one open issue affecting the performance of methods working with monocular images is the initialization step. Some packages explicitly reported a required initial motion to initialize the SLAM algorithm. In many vehicles such motion might not be feasible for the robotic platform—e.g., PTAM requires an initial translation along the x-axis of the camera, however, many robotic platforms have forward-facing cameras to enable navigation and lateral motion is not possible. In H/Out, PTAM succeeded, because the camera was rotated to face laterally. Furthermore, for several online packages, an inconsistent behavior was observed in the results between successive runs of the same dataset with the same parameters; a behavior reported in the papers. For example, H/In resulted in repeated failures of ORB-SLAM before producing an accurate trajectory and scene reconstruction. There are several causes, including the realtime constraint, where some of the frames could be dropped according to the load of the computing unit, and the random nature of RANSAC. RatSLAM, utilizes a learning process for adjusting how neurons are triggered, thus improving the trajectory as the robot visits the same place multiple times; e.g., in Q/Out it is able to produce a good result, given the spiral motion. Global optimization methods improve the resulting trajectory; e.g., running g[$$^2$$]o on the complete graph from ORB-SLAM on H/In, the [$$\chi $$]-squared test showed an improvement from [$$\chi ^2=183068$$] to [$$\chi ^2<10^{-9}$$]. However, being an expensive operation, ORB-SLAM usually runs g[$$^2$$]o only on a fixed number of keyframes. It is interesting to note that, if a general optimization frameworks is tailored for a specific package, such as in ORB-SLAM, the number of iterations required for convergence drops—e.g., for g[$$^2$$]o used in conjunction with ORB-SLAM, it takes on average in the order of tens of iterations, while using Ceres “straight out of the box” takes tens of thousands of iterations. COLMAP, which provides a complete pipeline for SfM problems utilizing Ceres, shows very promising results, although the time to get the estimated trajectory can be very long—e.g., for 700 images, 7–8 h. In addition to the packages reported above, several more packages were tested. In particular, preliminary tests of the following global optimization packages: Bundler [30], SBA [24], parallaxBA [33], and GTSAM [8] did not produce acceptable results. In particular in most cases they failed to reliably track features for most of the datasets and the global optimization converged into a local minima. Ongoing work includes the study of the effects of changing parameters, collection of data focusing on different type of motions, and the investigation of more open-source packages on the same datasets, including DTAM [27], DPPTAM [6], OKVis [23]. Acknowledgment The authors would like to thank the generous support of the Google Faculty Research Award and the National Science Foundation grants (NSF 0953503, 1513203, 1526862, 1637876). References 1. Agarwal, S., Mierle, K., Others: Ceres Solver (2015). http://​ceres-solver.​org 2. Ball, D., Heath, S., Wiles, J., Wyeth, G., Corke, P., Milford, M.: OpenRatSLAM: an open source brain-based SLAM system. Auton. Robot. 34(3), 149–176 (2013)CrossRef 3. Boydstun, D., Farich III., M., J.M., Rubinson, S., Smith, Z., Rekleitis, I.: Drifter sensor network for environmental monitoring. In: 12th Conference on Computer Robot Vision, pp. 16–22, June 2015 4. Ceriani, S., Fontana, G., Giusti, A., Marzorati, D., Matteucci, M., Migliore, D., Rizzi, D., Sorrenti, D.G., Taddei, P.: RAWSEEDS ground truth collection systems for indoor self-localization and mapping. Auton Robot. 27(4), 353–371 (2009)CrossRef 5. Civera, J., Grasa, O.G., Davison, A.J., Montiel, J.M.M.: 1Point RANSAC for extended kalman filtering: application to real-time structure from motion and visual odometry. J. Field Robot. 27(5), 609–631 (2010)CrossRef 6. Concha, A., Civera, J.: DPPTAM: dense piecewise planar tracking and mapping from a monocular sequence. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (2015) 7. Davison, A., Reid, I., Molton, N., Stasse, O.: MonoSLAM: real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 1052–1067 (2007)CrossRef 8. Dellaert, F., Kaess, M.: Square root SAM: simultaneous localization and mapping via square root information smoothing. Int. J. Robot. Res. 25(12), 1181–1203 (2006)CrossRefMATH 9. Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8690, pp. 834–849. Springer, Heidelberg (2014). doi:10.​1007/​978-3-319-10605-2\_​54 10. Forster, C., Pizzoli, M., Scaramuzza, D.: SVO: fast semi-direct monocular visual odometry. In: IEEE International Conference on Robotics and Automation, pp. 15–22 (2014) 11. Fraundorfer, F., Scaramuzza, D.: Visual odometry: part II: matching, robustness, optimization, and applications. IEEE Robot. Autom. Mag. 19(2), 78–90 (2012)CrossRef 12. Fuentes-Pacheco, J., Ruiz-Ascencio, J., Rendón-Mancha, J.M.: Visual simultaneous localization and mapping: a survey. Artif. Intell. Rev. 43, 55–81 (2015)CrossRef 13. Furgale, P.T., Barfoot, T.D.: Stereo mapping and localization for long-range path following on rough terrain. In: ICRA, pp. 4410–4416 (2010) 14. Geiger, A., Lenz, P., Stiller, C., Urtasun, R.: Vision meets robotics: the KITTI dataset. Int. J. Robot. Res. 32(11), 1231–1237 (2013)CrossRef 15. Geiger, A., Ziegler, J., Stiller, C.: Stereoscan: dense 3D reconstruction in real-time. In: Intelligent Vehicles Symposium (IV) (2011) 16. Hesch, J., Kottas, D., Bowman, S., Roumeliotis, S.: Consistency analysis and improvement of vision-aided inertial navigation. IEEE Trans. Robot. 30(1), 158–176 (2014)CrossRef 17. Jones, E.S., Soatto, S.: Visual-inertial navigation, mapping and localization: a scalable real-time causal approach. Int. J. Robot. Res. 30(4), 407–430 (2011)CrossRef 18. Kelly, J., Sukhatme, G.S.: Visual-inertial sensor fusion: localization, mapping and sensor-to-sensor self-calibration. Int. J. Robot. Res. 30(1), 56–79 (2011)CrossRef 19. Klein, G., Murray, D.: Parallel tracking and mapping for small ar workspaces. In: IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234 (2007) 20. Konolige, K., Agrawal, M., Solà, J.: Large scale visual odometry for rough terrain. In: International Symposium on Research in Robotics (ISRR), November 2007 21. Kummerle, R., Grisetti, G., Strasdat, H., Konolige, K., Burgard, W.: [$$g^2o$$]: a general framework for graph optimization. In: IEEE International Conference on Robotics and Automation, pp. 3607–3613 (2011) 22. Kümmerle, R., Steder, B., Dornhege, C., Ruhnke, M., Grisetti, G., Stachniss, C., Kleiner, A.: On measuring the accuracy of SLAM algorithms. Auton. Robot. 27(4), 387–407 (2009)CrossRef 23. Leutenegger, S., Lynen, S., Bosse, M., Siegwart, R., Furgale, P.: Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 34(3), 314–334 (2015)CrossRef 24. Lourakis, M.A., Argyros, A.: SBA: a software package for generic sparse bundle adjustment. ACM Trans. Math. Softw. 36(1), 1–30 (2009)MathSciNetCrossRef 25. Mourikis, A.I., Roumeliotis, S.I.: A multi-state constraint Kalman filter for vision-aided inertial navigation. In: IEEE International Conference on Robotics and Automation, pp. 3565–3572. IEEE (2007) 26. Mur-Artal, R., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015)CrossRef 27. Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: dense tracking and mapping in real-time. In: International Conference on Computer Vision. ICCV, Computer Society, pp. 2320–2327. IEEE, Washington, DC (2011) 28. Scaramuzza, D., Fraundorfer, F.: Visual odometry [tutorial]. IEEE Robot. Autom. Mag. 18(4), 80–92 (2011)CrossRef 29. Schönberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: IEEE Conference on Computer Vision and Pattern Recognition (2016) 30. Snavely, N., Seitz, S.M., Szeliski, R.: Modeling the world from internet photo collections. Int. J. Comput. Vis. 80(2), 189–210 (2008)CrossRef 31. Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W.: Bundle adjustment – a modern synthesis. In: Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms, Corfu, Greece, pp. 298–372 (2000) 32. Williams, B., Cummins, M., Neira, J., Newman, P., Reid, I., Tardós, J.: A comparison of loop closing techniques in monocular SLAM. Robot. Autonom. Syst. 57(12), 1188–1197 (2009)CrossRef 33. Zhao, L., Huang, S., Sun, Y., Yan, L., Dissanayake, G.: Parallaxba: bundle adjustment using parallax angle feature parametrization. Int. J. Robot. Res. 34(4–5), 493–516 (2015)CrossRef Footnotes 1 http://​wiki.​ros.​org/​Bags.   2 http://​wiki.​ros.​org/​gmapping.   Human-Robot Interaction 2 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_68 Human Pose Estimation from Imperfect Sensor Data via the Extended Kalman Filter Vlad Joukov¹  , Rollen D’Souza¹   and Dana Kulić¹   (1) Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada     Vlad Joukov Email: vjoukov@uwaterloo.ca   Rollen D’Souza Email: rs2dsouz@uwaterloo.ca   Dana Kulić (Corresponding author) Email: dana.kulic@uwaterloo.ca Abstract Accurate human pose estimation is of vital importance for a variety of human-robot interaction applications, including cooperative task execution, imitation learning and robot-assisted rehabilitation. As robots move from controlled indoor environments to unstructured and outdoor environments, the ability to accurately measure human pose without fixed sensors becomes increasingly important. In this paper, we present a general framework for accurately estimating human pose from a variety of sensors, including body-worn inertial measurement units and cameras, that can be used in indoor and outdoor environments to accurately estimate human pose during arbitrary 3D movements. Using a kinematic model of the human body, the sensor data is fused to estimate the body joint angles and velocities using a constrained Extended Kalman Filter which automatically incorporates feasible joint limits. For periodic movement such as gait, performance can be further improved via online learning of the gait model, individualized to the user. The proposed approach can deal with intermittent data availability and measurement errors during highly dynamic movements. Keywords Human pose estimationMotion captureExtended Kalman Filter This work was supported in part by Canada’s Natural Sciences and Engineering Research Council. 1 Introduction Accurate human pose estimation is of vital importance for a variety of human-robot interaction applications, including cooperative task execution, imitation learning and robot-assisted rehabilitation [1]. The gold standard for human motion data collection is marker-based motion capture, based on either optical or magnetic marker technology. A number of commercial solutions are available, e.g. [2, 3]. With optical motion capture systems, small reflective markers are attached to body landmarks, and observed with a set of cameras. The marker images and exact knowledge of the relative placements of the cameras are used to extract accurate 3D positions of each marker, achieving positioning sub millimeter positioning accuracy [2]. The marker positions coupled with a kinematic model of the human body and an association between each marker and the corresponding link in the model are then used to estimate the body pose. However, even with the high accuracy of marker positioning reported by the system manufacturers, the quality of pose estimation is frequently corrupted by three common measurement issues: (1) temporary marker occlusions, (2) unlabeled markers, and (3) mis-labeled markers. Temporary marker occlusions occur when no camera has a direct line of sight to the marker, and can be caused by either occluding objects in the scene or by self occlusion. Unlabeled markers appear when a marker is measured by one or more cameras in the capture scene, but cannot be associated with any of the known markers in the kinematic model. Mis-labeled markers, also known as marker swapping, occur when markers are incorrectly associated to known markers in the kinematic model. This issue occurs frequently when many markers are in a small volume, for example during close contact between demonstrators or fine hand/finger movements when the hands and fingers are brought together. These issues can be systematically addressed during off-line post processing, but for interactive applications, on-line pose estimation is required. Aristidou and Lasenby [4] proposed an approach for missing marker handling via marker position prediction, based on previously known marker positions and rigid body assumptions. Meyer et al. [5] proposed an approach for on-line marker labeling for full-body motion capture based on a custom initialization and a probabilistic iterative estimation procedure. Maycock et al. [6] propose an approach for hand movement tracking using unlabeled markers, based on a Global Nearest Neighbor (GNN) approach. Marker occlusions are handled using interpolation. Mandery et al. [7] propose an approach for pose estimation using unlabeled marker measurements using the smart sampling Kalman filter. While marker-based systems provide high accuracy in indoor settings, they require extensive camera setup and calibration, as well as line-of-sight visibility between the cameras and the markers, implying a restricted capture space. For many practical applications, these requirements are too restrictive. Recently, alternatives to camera-based motion capture based on body-worn Inertial Measurement Units (IMUs) have been proposed [8]. While IMUs enable capture in arbitrary environments, they suffer from gyro drift and poorer pose estimation accuracy than marker-based systems. Kalman filter approaches coupled with a kinematic model of the body have frequently been applied for IMU-based pose estimation, to fuse the accelerometer and gyroscope measurements and deal with gyro drift [9, 10]. In this paper, we propose a general framework for human pose estimation based on the Extended Kalman Filter (EKF) and a skeleton model of the body. Using the motion model and state and observation covariances estimated by the EKF, we automatically eliminate missing or incorrectly measured markers, perform marker matching and pose estimation, handle joint limits and sensor measurement noise and bias. The proposed approach has been extensively validated with a variety of dynamic movements and demographics. 2 Proposed Approach 2.1 Problem Statement We model the human body with an articulated rigid body skeleton, where adjacent bones are articulated via a set of N joint angles [$$q\_i, i \in 1...N$$]. The joint vector additionally includes the 6 dimensional pose of the body relative to the inertial frame. For the case of marker-based motion capture, our measurement consists of [$$M\_k$$] marker Cartesian positions and velocities, the number of which may be different at each time step k. [$$ \mathbf {z}^m\_k = \left[ \begin{array}{cc} \mathbf {y\_{M\_k}} \\ \mathbf {\dot{y}\_{M\_k}} \\ \end{array} \right] $$] where [$$\mathbf {y\_{M\_k}}$$] is the vector of Cartesian marker positions, and [$$\mathbf {\dot{y}\_{M\_k}}$$] is the vector of Cartesian marker velocities. For IMU based motion capture, IMUs are attached to a set of body limbs, and our measurement consists of angular velocities and linear accelerations measured at each IMU: [$$ \mathbf {z}^i\_k = \left[ \begin{array}{cc} \mathbf {\omega \_{K}} \\ \mathbf {a\_{K}} \\ \end{array} \right] $$] where K is the number of IMUs, [$$\mathbf {\omega \_{K}}$$] is the vector of angular velocity and [$$\mathbf {a\_{K}}$$] is the vector of linear acceleration measurements in the local IMU frame. Given the observations at frame k, our objective is to estimate the pose [$$q\_k$$]. Additionally, in the case of marker measurements, to deal with missing and incorrectly labeled or unlabeled markers, we must first associate the observed markers with the corresponding skeleton location, and discard incorrect markers. 2.2 The Extended Kalman Filter Formulation The EKF state consists of the joint positions, velocities and accelerations, [$$\mathbf {x} = [\mathbf {q}\, \mathbf {\dot{q}}\, \mathbf {\ddot{q}}]^T$$]. We assume a constant acceleration state evolution model, such that [$$ \mathbf {x}\_{k+1} = \left[ \begin{array}{ccc} 1 &{} dt &{} dt^2/2 \\ 0 &{} 1 &{} dt \\ 0 &{} 0 &{} 1 \end{array} \right] \mathbf {x}\_{k} + \mathbf {w}\_k $$] where [$$\mathbf {x}\_{k}$$] is the vector of joint angles, velocities and accelerations at time k, dt is the sampling time interval and [$$\mathbf {w}$$] is the process noise, assumed to be zero-mean Gaussian noise with covariance [$$Q\_k$$]. The observations are related to the state via the non-linear forward kinematics, [$$\mathbf {z} = h(\mathbf {x})$$]. To perform state estimation, the forward kinematics is linearized about the current operating point. [$$ \mathbf {z}\_k = \left[ \begin{array}{cc} J \end{array} \right] \mathbf {x}\_k + \mathbf {v}\_k $$] where J is the Jacobian of the measurement equation with respect to the state [$$\mathbf {x}$$], and [$$\mathbf {v}$$] is the observation noise, assumed to be zero-mean Gaussian noise with covariance [$$R\_k$$]. The unconstrained EKF formulation described above can lead to joint angle estimates which are not physically feasible. To ensure that the estimated joint angles remain physically feasible, the joint angles are constrained to remain within joint limits, by restricting the Kalman gain to ensure that the updated state remains within the constraints [11, 12]. In IMU based estimation, another significant source of error is the gyroscope drift, particularly about rotation axes parallel with gravity, i.e., the yaw rotation in the world frame. With some apriori knowledge of the motion, we can estimate the mean yaw angle of links with respect to the torso or the world frame. Placing a virtual yaw sensor on a link and assuming it always takes this mean as measurement effectively prevents drift from accumulating [13]. The measurement noise covariance of the virtual sensor can be used as a tuning parameter to allow for accurate yaw motion estimation while reducing the effect of drift. Figure 1 shows the virtual yaw sensor drift correction for a gyroscope with [$$0.01\,rad/s$$] bias experiencing sinusoidal motion about the world z axis. [] Fig. 1. Tracking of a single joint model with an IMU sensor experiencing sinusoidal motion about the world z axis. EKF continues to integrate the bias ([$$0.01\,rad/s$$]) in the gyroscope measurement accumulating error in the joint angle estimate. A virtual yaw sensor is added with a mean measurement of 0, it does not allow the bias error to accumulate and maintains accurate motion estimation. For the case of periodic movement, the assumption of constant acceleration can be removed by learning an individualized model of the periodic movement over time, to more accurately model the acceleration. This approach explicitly models the state as a parameterized sum of sinusoids, and learns the model parameters during online observation of the movement. The learned model enables drift free integration of the velocity and acceleration measurements, and improves pose estimation during periodic movement such as gait [13]. At each time step, the EKF estimates the measurement covariance [$$P\_k$$], based on the Kalman filter update equations [14]. 2.3 Incorrect and Missing Marker Detection To allow for incorrect and missing markers during online measurement, at each time step, the probability distribution of the location of each model marker is predicted using the current state estimate, the observation model, and the measurement covariance. For each observed marker, the probability that the marker is generated by the model probability distribution is computed. The observed marker with the highest probability is associated with the model marker. This approach simultaneously deals with swapped and unlabeled markers. To handle missing markers, the observation vector size varies at each time step, based on the number of markers observed. Missing markers therefore do not contribute to the measurement update; only the observable markers and the state evolution model are used to estimate the joint pose. Information from the observable markers and the estimates of the joint pose and velocity allow the filter to smoothly deal with temporary marker occlusions. [] Fig. 2. Sequence of frames of sample sequence: two actors interact and one actor then falls to the ground, occluding the posterior markers. The actor remains on a mattress for around twenty seconds before standing up. Visible, attached markers shown as red boxes. Cyan boxes refer to markers that are missing. Yellow boxes refer to identified mislabeled markers. 3 Experimental Results The proposed approach based on marker data is evaluated with a dynamic motion capture dataset with significant occlusions, marker-swapping and unlabeled markers. Figure 2 illustrates an exemplar sequence, consisting of two actors – both outfitted with markers – interacting briefly followed by an acted fall, with significant marker occlusions during ground contact. We do not assume that any labels are correct due to the possible swaps and frequent occlusions in the raw data, and the procedure described in Sect. 2.3 is applied to all marker measurements at each time step to perform labeling prior to pose estimation. In the initial frame, observed markers are first assigned to the actors – and their associated model markers – of the motion capture scene. An example sequence illustrating marker trajectories and the recovered pose is shown in Fig. 3. [] Fig. 3. A sample marker trajectory (a) illustrating the position of an occluded and mislabeled marker, together with the corresponding recovered joint angle trajectory (b). One standard deviation (covariance estimate by EKF) is shown as the filled region surrounding the estimate. Estimates made by EKF (solid) and observed marker data (thick dotted, if visible) are both shown. The proposed approach is compared to off-line pose estimation and a Jacobian inverse based approach [15]. For off-line pose estimation, pre-processing is applied to fill-in missing marker data, remove spurious markers and correct any mislabeling, and the pose is then estimated using global optimization. The Root Mean Squared Error (RMSE) between the measured marker positions and their estimates based on the recovered joint angles and the forward kinematics are compared in Table 1. Table 1. Root mean squared errors (cm) of the off-line, Jacobian inverse and the proposed EKF-based methods. Standard deviation of errors (cm) are in parentheses. The marker that was visible for most of the dataset was chosen on each extremity for these calculations. +----------------+----------------+--------------+-------------+ | Marker subset | Post-processed | Jacobian | EKF | +:===============+:===============+:=============+:============+ | Left Hand | 3.95 (1.87) | 3.65 (0.38) | 2.34 (0.25) | +----------------+----------------+--------------+-------------+ | Right Hand | 3.47 (1.51) | 14.46 (2.58) | 2.11 (0.33) | +----------------+----------------+--------------+-------------+ | Left Foot | 1.79 (0.18) | 2.88 (0.64) | 2.60 (0.13) | +----------------+----------------+--------------+-------------+ | Right Foot | 2.00 (0.33) | 4.28 (1.12) | 2.56 (0.45) | +----------------+----------------+--------------+-------------+ | Head | 5.43 (2.20) | 3.06 (0.37) | 2.84 (0.18) | +----------------+----------------+--------------+-------------+ | Right Shoulder | 3.51 (1.83) | 9.60 (0.86) | 1.61 (0.05) | +----------------+----------------+--------------+-------------+ | Left Shoulder | 2.52 (1.15) | 3.09 (0.76) | 3.96 (0.56) | +----------------+----------------+--------------+-------------+ [] Fig. 4. Marker jumps in the raw data impact the continuity of joint angle velocities produced by the Jacobian inverse approach (bottom). The proposed approach (top) preserves continuity by throwing away data that is unlikely to match the prediction made by EKF. As can be seen from Table 1, the proposed method significantly outperforms the Jacobian-based approach, and achieves performance comparable to the off-line method. In particular, performance for the right hand is significantly improved; the right hand is occluded for over ten seconds followed by mislabeling of the marker which the Jacobian inverse based approach cannot recover from. The proposed method detects the incorrect label and reassigns the marker appropriately. Also noteworthy is the standard deviation of the error, shown in Table 1 within parentheses. The proposed approach generally minimizes this deviation, indicating smoother and more consistent generated motion as opposed to that of the Jacobian inverse approach. One possible reason for this can be the marker jumps found in the raw data. Marker jumps are where markers erroneously jump a few meters yet remain visible. Since the Jacobian inverse approach cannot effectively determine which data to reject, it generates visible discontinuities in the resulting joint angle velocities, shown in Fig. 4. In addition to rejecting these jumping markers the proposed approach may temporarily assign a nearby, unassigned marker as per Sect. 2.3. Figure 5 illustrates the constrained estimation, taking into account joint limits for the elbow joint, which is limited by a 150[$$^\circ $$] range. The over-extension of the elbow joint is prevented during the occlusion time frames. The proposed IMU only approach was compared to the marker based EKF. Participants were asked to march in place while wearing IMU sensors at the waist, thighs, and ankles. Three markers were placed on each IMU to estimate their orientation with respect to the limbs. Markers were also placed on the ankles, knees, and hips; these were used to estimate the joint centers and link lengths to generate the kinematic models as well as for ground truth inverse kinematics using the proposed marker EKF approach. Figure 6 shows the right knee joint angle estimation of the IMU EKF and Periodic-EKF [13] approaches compared to the marker based estimation, the proposed methods achieve an accuracy of 2.79 and 2.07 degrees RMSE respectively. IMU only estimation achieves accuracy comparable to that of camera based motion capture and can be utilized in environments for which camera based systems are not suitable. [] Fig. 5. Joint angles produced for the right elbow joint. Range of motion permitted is filled in with red. [] Fig. 6. Right knee joint angle estimation with IMU EKF and Periodic-EKF compared to the marker based EKF for marching motion. The IMU based approaches are comparable in accuracy to the motion capture. Due to periodic nature of marching, the IMU Periodic-EKF converges to an accurate motion model and improves estimation. 4 Conclusions and Future Work In this paper, a comprehensive framework for online pose estimation was developed based on the Extended Kalman filter. The proposed approach models the human body as an articulated skeleton, and estimates the body position and orientation in space, together with the joint positions and velocities via fusion of a motion model and noisy measurements. Imperfect motion capture measurements, including missing and mislabeled markers, as well as IMU measurements, can be incorporated into the same framework. The proposed method is evaluated on a variety of datasets and demonstrates improved performance over state-of-the-art systems. References 1. Kulić, D., Venture, G., Yamane, K., Demircan, E., Mizuuchi, I., Mombaur, K.: Anthropomorphic movement analysis and synthesis: a survey of methods and applications. IEEE Trans. Robot. 32(4), 776–795 (2016)CrossRef 2. Motion analysis. http://​www.​motionanalysis.​com 3. Vicon. https://​www.​vicon.​com/​ 4. Aristidou, A., Lasenby, J.: Real-time marker prediction and CoR estimation in optical motion capture. Vis. Comput. 29(1), 7–26 (2013)CrossRef 5. Meyer, J., Kuderer, M., Müller, J., Burgard, W.: Online marker labeling for fully automatic skeleton tracking in optical motion capture. In: 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 5652–5657. IEEE (2014) 6. Maycock, J., Rohlig, T., Schroder, M., Botsch, M., Ritter, H.: Fully automatic optical motion tracking using an inverse kinematics approach. In: 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), pp. 461–466 (2015) 7. Steinbring, J., Mandery, C., Pfaff, F., Faion, F., Asfour, T., Hanebeck, U.D.: Real-time whole-body human motion tracking based on unlabeled markers. In: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (2016) 8. Roetenberg, D., Luinge, H., Slycke, P.: Xsens MVN: full 6DOF human motion tracking using miniature inertial sensors. Xsens Motion Technologies BV, Technical report (2009) 9. El-Gohary, M., McNames, J.: Shoulder and elbow joint angle tracking with inertial sensors. IEEE Trans. Biomed. Eng. 59(9), 2635–2641 (2012)CrossRef 10. Lin, J.F., Kulić, D.: Human pose recovery using wireless inertial measurement units. Physiol. Measur. 33(12), 2099–2115 (2012)CrossRef 11. Gupta, N., Hauser, R.: Kalman filtering with equality and inequality state constraints. arXiv preprint (2007). arXiv:​0709.​2791 12. Bonnet, V., Daune, G., Joukov, V., Dumas, R., Fraisse, P., Kulić, D., Seilles, A., Andary, S., Venture, G.: A constrained extended kalman filter for dynamically consistent inverse kinematics and inertial parameters identification. In: IEEE-RAS International Conference on Biomedical Robotics and Biomechatronics, pp. 952–957 (2016) 13. Joukov, V., Bonnet, V., Karg, M., Venture, G., Kulić, D.: Rhythmic EKF for pose estimation during gait. In: IEEE-RAS International Conference on Humanoid Robots, pp. 1167–1172 (2015) 14. Welch, G., Bishop, G.: An introduction to the kalman filter. Technical report TR 95–041, Department of Computer Science, University of North Carolina at Chapel Hill (2006) 15. Yamane, K., Nakamura, Y.: Natural motion animation through constraining and deconstraining at will. IEEE Trans. Visual. Comput. Graph. 9(3), 352–360 (2003)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_69 Influence of Emotional Motions in Human-Robot Interactions Magda Dubois¹, Josep-Arnau Claret², Luis Basañez² and Gentiane Venture¹   (1) Tokyo University of Agriculture and Technology, Tokyo, Japan (2) Institute of Industrial and Control Engineering, Technical University of Catalonia (UPC), BarcelonaTECH, Barcelona, Catalonia, Spain     Gentiane Venture Email: venture@cc.tuat.ac.jp Abstract The purpose of this study is to establish if emotional motions are important for the human perception of robots using proxemics as a tool. In this Human-Robot Interaction (HRI) experiment, participants were given instructions from a robot that was conveying either a sad, happy or neutral emotion. The emotional motions were generated as a low priority task using the robot Jacobian null-space. Participants were guided by the robot to sit at a desk to fill in a questionnaire and then to approach the robot to a distance that made them feel comfortable. A significant difference was found between the distance taken towards the robot in the Happy and the Sad conditions confirming our hypothesis that emotions conveyed by the robots influence how it is perceived. Keywords ProxemicsHRIEmotional motion 1 Introduction In order to have a successful human-Robot Interaction (HRI), which is a key issue in many scenarios [1], the robot must be seen as a living entity and its intentions recognized [2, 3]. Movement has been proven to be a sign of life [4] and, most importantly, can be used to convey emotions in a better way than face [5]). This is of great importance because, besides making the interaction more pleasant [6], humans respond to a robot according to their interpretation of its emotional state [5] and thus the robot motions can be used to improve the understanding of its intentions. In addition, discerning our interlocutors mental state increases the quality of the interaction and helps to anticipate future states [7]. For these reasons, this work focuses on the emotional motions of robots. It can be conjectured that such motions play an important role in HRI by influencing how humans perceive robots as social agents. 1.1 Purpose Our experiment had two main goals. The first was to see if conveying emotions in motions plays a role in the perception of robots by humans. Movements can make an object seem alive [4] but the purpose here was to investigate the role of the type of movement. Therefore a comparison was made between a motion with emotion (Sad or Happy) and one without (Neutral), in order to establish if a robot moving in a neutral way with mechanical movements is perceived by humans as a social entity or as a machine. The second was to evaluate if conveying different emotions had an impact on the interaction. Using a closed-ended questionnaire, it was found that the Happy and Sad emotions in the Aldebaran Pepper robot (a 121 cm high pseudo-humanoid robot) were well conveyed. Although humans can distinguish those two emotions, however, the aim has been to evaluate if the humans reaction to those emotions can also be measured at an unconscious level in a more natural interaction context and if they reacted unconsciously to those emotions as they would with a human. [] Fig. 1. Pepper’s emotional motions used during the task: the columns show the different motions and the rows show the different emotions that were generated for the experiment. 1.2 Proxemics as a Tool For the experiment a tool that fulfils the following conditions are necessary: C1 It must provide an indirect measure. C2 It should establish if the robot is perceived as human or machine. C3 It must establish the impact of the different emotions on the interaction. C4 It has to work in social interaction or in HRI. Proxemics is the study of space as an unconscious nonverbal language (C1 validated) and how humans manage it (Fig. 2). It gives an idea of the nature of the relationship between two entities. The figure also gives the horizontal distances as described by Hall [8]. Hall divided the human space in four zones: the intimate space, for embracing, touching or whispering; the personal space, for interactions among friends or family; the social space, for interactions among acquaintances; and the public space, for public speaking. Based on this and other research [9–11], it has been shown that humans approach other humans while keeping a certain distance, whereas they do not follow this rule with objects, as computers for example, this implies that a first interaction with a human will take place in the social space [11] while one with a computer will take place in the personal space. Knowing that there is a difference in the distance humans take towards these two entities, this allows us to distinguish between them (C2 validated). In [12], the approach towards a robot with a synthesized voice was bigger than towards a robot with a human voice due to fear of the unknown, which suggests that it can also be used to assess how a human reacted during an interaction (C3 validated). In [1, 13], proxemics was used to prove that a large minority of subjects did not perceive the robot as a social entity, as they approached it with a much closer distance than they would to a stranger (C4 validated). Therefore, proxemics is regarded as a tool to understand HRI. [] Fig. 2. Proxemics and experimental results 1.3 Hypothesis 1. H1: Participants approach the robot with a smaller distance in the Neutral condition than in the Happy or Sad one because a robot with mechanical movements is less human, thus participants should not respect it’s personal space.   2. H2: Participants approach the robot with a smaller distance in the Happy condition than in the Sad one because as during human-human interactions, the interpersonal distance is smaller with a likeable person.   2 Experimental Protocol Secondary Effect Minimization. In order to avoid bias and minimize experimental effects, two measure were undertaken. First, as voice has been proven to have an impact on the distance [12], Pepper was non talkative during the whole experiment. All instructions were given to the participants through motion and with neutral text displayed on the robot chest tablet. Second, to make the interaction more natural, participants were told to fill in a questionnaire on HRI but were not told about Pepper’s intervention. They didn’t know Pepper was part of the experiment. Robot Motion and Emotion Generation. During the interaction, the motions that changed according to the condition were: waving, walking (rolling) and pointing (Fig. 1 and Table 1). The three different conditions were Happy, Neutral and Sad. The Pleasure-Arousal-Dominance (PAD) space [22] was used to generate the emotional motions. An emotion is ultimately defined as a point in a three dimension PAD emotion space which domain is [$$[-1,1]^3$$]. Following the work of Glowinsky [23] and the correlation between dominance and a direct gaze found in several studies [24, 25], the values in each of the PAD dimensions are transformed to three kinematic features of the robot that can be controlled: jerkiness (J), activity (V, which captures how energetic the robot motion is and positively correlates with the extension of the arms and the overall velocity of the robot parts), and gaze directness towards the user (G). Thus, a new space, JVG, is defined such that the domain of each of its dimensions is [0, 1]. To transform a point from the PAD space to the kinematic features JVG space the next linear map was used: [$$ \begin{pmatrix} J \\ V \\ G \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1-P\\ 1+A\\ 1+D \end{pmatrix} $$] The JVG point is then fed to a task priority inverse kinematic module as a secondary task [26]. With this approach, it is possible to execute a main priority task with the robot hands, and exploit the robot redundancy through the null-space decomposition of its Jacobian to execute an emotional motion. Previous results using this framework showed that, amongst four different emotions, Happiness and Sadness were well conveyed. The values of Table 1 were used for this experiment. To distinguish between the Neutral and Happy postures during waiting and movement, in addition to the JVG values, the head and torso inclination angles that had been found in our previous research [14] were used (the values were approached as much as physically allowed). Table 1. Kinematic and JVG values for different emotional conditions and different motions +---------+----------------------------------------+------------------------------------+---------------------+--------------------------------------------------------+ |   | Waiting motion | Walking motion | Waving motion | Pointing motion | +:========+:=======================================+:===================================+:====================+:=======================================================+ | Sad | JVG = (1;0;1) | Waiting + platform movement = slow | JVG = (1;0;1) | Behaviour “This\_9” in [21] | +---------+----------------------------------------+------------------------------------+---------------------+--------------------------------------------------------+ | Happy | JVG = (0;1;1) Head, torso [deg] = 0 | Waiting + platform movement = fast | JVG = (0;1;1) | Behaviour “This\_12” in [21] | +---------+----------------------------------------+------------------------------------+---------------------+--------------------------------------------------------+ | Neutral | Default position Head, torso [deg] < 0 | Waiting + platform movement = fast | JVG = (0.5;0.5;0.5) | Straight arm raised at 90[$${}^{\circ }$$] on the side | +---------+----------------------------------------+------------------------------------+---------------------+--------------------------------------------------------+ The code was executed with Ubuntu, using the ROS middleware Indigo distribution. The trajectories were generated offline using a C++ code, fed to a python script and further sent to Pepper as joint trajectories using Choregraphe Suite 2.1. [] Fig. 3. Experimental room layout: participants entered alone in the room, on Table 1 were sets of questionnaires (English and Japanese) and a poster “Please wait to be seated”. Pepper waved, approached, welcomed them and then pointed Table 2 to offer them to go sit there to fill in the questionnaire. 2.1 Study Setup Participants were asked to fill in a questionnaire. Before entering the room, they were told that they would be alone in the room, that all necessary instructions were inside and to let know the experimenter when they were had finish filling the questionnaire. When they opened the door, Pepper was in a stationary position (waiting motion) 4m away from the door (Fig. 3). Between the participant and Pepper, there was a table (“Table 1” in Fig. 3) with 2 sets of questionnaires (one in Japanese and one in English) and a poster written in the 2 languages with the text: “Please wait to be seated”. As soon as the participant closed the door behind him, the experimenter activated Peppers motion. Pepper waved to the participant (waving motion), approached them (walking motion) to a distance of 1.5 m, and looked at them. His tablet displayed “Welcome” for four seconds and he then moved backwards (walking motion) on one meter with and angle of thirty degrees. The tablet displayed “Please sit there” while indicating them the table (“Table 2” in Fig. 3) (pointing motion) where they could sit to fill in the questionnaire. After four seconds, the screen became black again. Pepper stood still (waiting motion) during the time when the participant filled in the questionnaire while in “basic awareness” mode (a module that makes the robot establishing and keeping eye contact with people). The whole experiment was viewed by the experimenter outside the room thanks to Pepper camera and at the end of the experiment the participant was asked to approach Pepper (waiting motion) to a distance that made him/her comfortable. Marks on the ground were drawn and the distance was measured after the participant left. 2.2 Measures The questionnaire used was constituted of: 1. 1. Basic demographics and information about previous exposure to robots.   2. 2. The Godspeed Questionnaire [17]: measures the users perception of the robot in terms of Anthropomorphism, Animacy, Likeability, Perceived Intelligence and Perceived Safety.   3. 3. The Big 5 Personality Test [18, 19, 21]: analyses the users personality in terms of Intellect and Imagination, Extraversion, Agreeableness, Conscientiousness and Emotional Stability.   4. 4. The NARS Questionnaire [20]: measures the negative attitudes towards robot in terms of Situations of Interaction with Robots, Social Influence of Robots and Emotions in Interaction with Robots. Most of the answers were given on a 5 point scale and all of it was translated to Japanese.   2.3 Participants Forty-three subjects participated, but six of them were excluded either because it was clear they did not understand the task or because of an error in the protocol. The thirty-seven remaining were all affiliated with Tokyo University of Agriculture and Technology (mean age = 23.3 years, SD = 3.6 years). The majority was Japanese ([$$50\%$$]) and there was a higher ratio of males ([$$60\%$$]). All participants that volunteered were told they could leave the experiment at any time. The Happy, Sad and Neutral conditions were randomly assigned to each participant. [] Fig. 4. Group mean values score on the GS questionnaire between happy and sad, only the mean in likeability was statistically significant (p = 0.04, mean difference = 0.441). [] Fig. 5. ANOVA of the 3 different groups. A significant difference in the analysis of variance was found. Therefore a set of Bonferroni corrected pairwise comparisons (Post-Hoc tests) were computed. 3 Results All tests were performed at a 0.05 significance level. Within each group, no outlier for the measure of the distance was found. A low negative correlation (the interpretation of the size of a correlation was made similarly to medical research [16]) between perceived stress and distance (Pearson’s r = −0.417, p = 0.01) was found. The participants were then split in three groups corresponding to Pepper’s emotion condition and the questionnaire mean scores were studied to verify if the emotions were well perceived (Fig. 4). Regarding H1, the mean scores between the three groups on the Anthropomorphism scores were analysed but no significant difference was found. For H2, a one-tailed Student’s t test was performed between the Happy and Sad groups and a significant difference was found on the Likeability score (p = 0.04, mean difference = 0.441). An analysis of variance (Fig. 5) was then performed between the groups for the measure of the distance and a significant difference was found (F(2,34) = 3.28 [$$\le $$] 4.94, p = 0.013). Therefore a set of Bonferroni corrected pairwise comparisons were computed, and a significant difference was found between the Happy and the Sad condition (p = 0.012, mean difference = −28.3 cm) as illustrated in Fig. 2. 4 Discussion The low negative correlation between the distance and the participant’s perceived stress towards Pepper was expected and confirms that the task was conducted effectively as well as the distance accurately measured. H1 was not supported but, as no difference between the groups in the perception of Pepper’s anthropomorphism (Fig. 4) was found, it might be explained by the fact that the neutral condition was not well conveyed. The first explanation is that it could be due only to Pepper’s physical aspect (he is very “cute” and clearly anthropomorphic). The second explanation is that no studies proves that Pepper’s default mode was completely neutral. Some emotion in his movements might already be implemented. Further research should investigate how the movements should be done to convey less anthropomorphism in order to test H1 again. It is also possible that the neutral condition was well conveyed but was simply not perceived by the participants because, as it is a humanoid robot, humans simply assume that it conveys emotions. If the motion lacks emotion they attribute the emotion according to the physical aspect, explaining why the likeability of the neutral condition is similar or even higher to the Happy one. Consistent with H2, the results indicated that participants interacting with a Sad Pepper maintained a greater interpersonal distance that the ones interacting with a Happy Pepper. So not only did the participants perceive Pepper differently but they interacted with it as they would with another human being. This can be an important feature in HRI taking into account that it can be modified simply by varying some parameters of the robot’s movements. Furthermore comparing Fig. 2 and the mean values of the distances taken during the Happy (mean = 109.2 cm, CI = [95.78, 122.6]) and the Sad condition (mean = 137.5 cm, CI = [124.54, 150.4]), it can be clearly seen that the distance taken towards a Happy Pepper falls in the distance interval humans take for interactions among good friends or family (personal distance), whereas the distance towards the Sad Pepper corresponds to the distance used for interactions among acquaintances (social distance). This supports the fact that emotional motions can be used to modulate HRI. Although this research has yielded positive results caution should be taken with respect to the interpretation of the results. Another possible explanation is based on previous research [15], where the authors demonstrated that gaze had differential effects on the physical distance humans kept between themselves and a robot, depending upon whether the humans had established a positive relation with the robot. If the person had not established a good relation with the robot (ex. they did not like the robot), the robot gaze behavior oriented toward the person increased the amount of distance the person maintained between the robot. In our experiment, during the interpersonal distance procedure, there is no eye contact. However the greater distance could be a consequence of Pepper’s gaze during the previous interaction. Additionally, because the Sad Pepper has a more hunched position than the Happy one, this could trigger the participants to stay backwards in order to see his face better (although either way there is no eye-contact because Pepper’s eyes are oriented towards the ground). It is also necessary to consider that the distances established by Hall may differ with different cultures, therefore we can not fully validate the results comparing the experiments values to the different spaces of the proxemics. Therefore future research should investigate the relationship between emotional motions and the interpersonal distance with various robots. In order to investigate H1, less humanoid robots should be used, and conveying low anthropomorphism through neutral motions should be studied. To confirm H2, taller robots should be used to avoid people stepping backwards trying to establish eye contact. Further studies with different motions and emotions could also be of interest. Acknowledgement This research is supported by the JSPS Challenging Exploratory Research Grant 15K12124, and partially supported by the Spanish MINECO project DPI2014-57757-R and the Spanish predoctoral grant BES-2012-054899. References 1. Barnes, L., Fincannon, T., Murphy, R., Riddle, D.R.: Evidence of the need for social intelligence in rescue robots. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1089–1095 (2004) 2. Fiore, S.M., Wiltshire, T.J., Lobato, E.J.C., Jentsch, F.G., Huang, W.H., Axelrod, B.: Toward understanding social cues and signals in humanrobot interaction: effects of robot gaze and proxemic behavior. Front. Psychol. 4(859), 1–15 (2013) 3. Breazeal, C.: Social interactions in HRI: the robot view. IEEE Trans. Syst. Man Cybern. Part C 34(2), 181–186 (2004)CrossRef 4. Mori, M.: The uncanny valley. Energy 7(4), 33–35 (1970) 5. Bethel, C.L., Murphy, R.R.: Survey of non-facial and non-verbal affective expressions for appearance-constrained robots. IEEE Trans. Syst. Man Cybern. Part C 38, 83–92 (2008)CrossRef 6. Moshkina, L., Arkin, R.C.: Human perspective on affective robotic behavior: a longitudinal study. In: IEEE International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) 7. Przyrembel, M., Smallwood, J., Pauen, M., Singer, T.: Illuminating the dark matter of social neuroscience: considering the problem of social interaction from philosophical, psychological, and neuroscientific perspectives. Front. Hum. Neurosci. 6, 190 (2012)CrossRef 8. Hall, E.T.: The Hidden Dimension. Doubleday, New York (1966) 9. Kenneth, B.: Little, personal space. J. Exp. Soc. Psychol. 1, 237–247 (1965)CrossRef 10. Knapp, M.L., Hall, J., Horgan, T.: Nonverbal Communication in Human Interaction. Wadsworth Publishing, New York (2013) 11. Baumeister, R.F., Bushman, B.J.: Social Psychology and Human Nature. Wadsworth Publishing, Belmont (2008) 12. Walters, M.L., Koay, K.L., Dautenhahn, K., te Boekhorst, R., Syrdal, D.S.: Human approach distances to a mechanical-looking robot with different robot voice styles. In: 17th IEEE International Workshop on Robot and Human Interactive Communication, pp. 707–712, Munich (2008) 13. Walters, M.L., Koay, K.L., Dautenhahn, K., te Boekhorst, R.: The influence of subjects personality traits on personal spatial zones in a human-robot interaction experiment. In: IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN (2005) 14. Izui, T., Milleville, I., Sakka, S., Venture, G.: Expressing emotions using gait of humanoid robot. In: 4th IEEE International Symposium on Robot and Human Interactive Communication, Kobe (2015) 15. Mumm, J., Mutlu, B.: HumanRobot Proxemics: physical and psychological distancing in humanrobot interaction. In: 6th International Conference on HumanRobot Interaction, Lausanne, Switzerland (2011) 16. Mukaka, M.M.: Statistics corner: a guide to appropriate use of correlation coefficient in medical research. Malawi Med. J. 24, 69–71 (2012) 17. Bartneck, C., Croft, E., Kulic, D.: Measuring the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. In: Metrics for HRI Workshop, Technical report, vol. 471, pp. 37–44 (2008) 18. McCrae, R.R., John, O.P.: An introduction to the five-factor model and its applications. J. Pers. 60(2), 175–215 (1992)CrossRef 19. Goldberg, L.R.: The development of markers for the big-five factor structure. Psychol. Assess. 4, 26–42 (1992)CrossRef 20. Nomura, T., Shintani, T., Fujii, K., Hokabe, K.: Experimental investigation of relationships between anxiety, negative attitudes, and allowable distance of robots. In: International Conference on Human-Computer Interaction, pp. 13–18 (2007) 21. Chin, S.: Animations Library (2014). https://​github.​com/​NightHacking/​NaoHacking/​blob/​master/​Animations%20​Library/​AnimationsLibrar​y.​cbl 22. Mehrabian, A.: Pleasure-arousal-dominance: a general framework for describing and measuring individual differences in Temperament. Current Psychol. 14(4), 261–292 (1996). SpringerMathSciNetCrossRef 23. Glowinski, D., Dael, N., Camurri, A., Volpe, G., Mortillaro, M., Scherer, K.: Towards a minimal representation of affective gestures. IEEE Trans. Affect. Comput. 2, 106–118 (2011)CrossRef 24. Adams Jr., B.R., Kleck, R.E.: Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion 5, 3–11 (2005)CrossRef 25. Carney, D.R., Hall, J.A., LeBeau, L.S.: Beliegs about the nonverbal expression of social power. J. Nonverbal Behav. 29, 105–123 (2005)CrossRef 26. Chiaverini, S., Oriolo, G., Walker, I.D.: Springer Handbook of Robotics. Springer-Verlag New York, Inc., New York (2007). Chap. 11 © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_70 Energy Based Control for Safe Human-Robot Physical Interaction Anis Meguenani¹  , Vincent Padois¹  , Jimmy Da Silva¹  , Antoine Hoarau¹   and Philippe Bidaud¹   (1) Institut des Systèmes Intelligents et de Robotique, Sorbonne Universités, UPMC Univ. Paris 06, CNRS UMR 7222, 75005 Paris, France     Anis Meguenani (Corresponding author) Email: meguenani@isir.upmc.fr   Vincent Padois (Corresponding author) Email: padois@isir.upmc.fr   Jimmy Da Silva (Corresponding author) Email: dasilva@isir.upmc.fr   Antoine Hoarau (Corresponding author) Email: hoarau@isir.upmc.fr   Philippe Bidaud (Corresponding author) Email: bidaud@isir.upmc.fr Abstract In this paper, we propose physically meaningful energy related safety indicators for robots sharing their workspace with humans. Based on these indicators, safety criteria are introduced as constraints in the control algorithm. The first constraint is placed on the kinetic energy of the robotic system to limit the amount of dissipated energy in case of collision. This constraint depends on the distance between the robot and the human operator. The distance is computed with a point cloud based algorithm acquired using a set of depth sensors (Kinects). The second constraint is on the amount of potential energy that is allowed to be generated within the human-robot system during physical contact. It is used to modulate the contact forces. The control algorithm is formulated as an optimization problem and computes every time step the actuation torques for a KUKA LWR4 manipulator given some task to be performed, the introduced constraints and the physical limitations of the system to respect. The overall framework allows a human operator to safely enter the robot’s workspace and physically interact with it. Keywords SafetyHuman-robot interactionConstraints compatibilityEnergyQP 1 Introduction Domains of application for robots are evolving from a purely structured industrial context to the human world as intervention machines and assistants to aid a person in the completion of a manual task. Safety is therefore of most importance. This has a direct impact on the formulation of the control problem that must be completely reconsidered. When industrial robots are aimed for tasks that are relatively simple e.g. pick and place manipulation, repetitive, within perfectly known static and protected environments; Intervention and service robotic systems are confronted to more challenging scenarios: unknown, constrained and dynamic environments and possible deliberate/undeliberate interactions with humans. To ensure safe human-robot interactions, several approaches have been explored in the robotics literature. At the hardware level, the mechanical design have been optimized by including torque sensing at the joint level. This provides a way to actively control the impedance of the robot. The Kuka-DLR lightweight robot [1] has been specifically designed for these purposes. On the software level, different control approaches using intrinsic and extrinsic force/torque sensors have been developed to handle safety during pre and post impact/contact phases [2]. Haddadin in [3] and De Luca in [4] present different strategies to reduce the effect of non deliberate impacts. A collision detection parameter based on the sensed external torque is introduced and used to scale down the link inertia obtaining a “lighter” robot that “flees” from the collision area. Heizmann and Zelinsky in [5] propose a safety criterion based on the potential impact force to filter the control torque of the system. [] Fig. 1. View of a user sharing its workspace with a KUKA LWR4 manipulator; with the experimental setup used to detect the human operator and control the robotic system. During human-robot interaction, the degree of danger towards the person is mainly caused by two parameters: the impact force created at the collision instant and the contact forces existing after the establishment of physical contact. The most generic way to include and express these parameters is to use an energetic formulation. Indeed, energy is a universal entity that can describe all the physical phenomena occurring during human-robot interaction. Energy has already been discussed in [3, 6] as a good representation of the risk of injury. It is used in this work to synthesize indicators whose value is related to both impact and contact forces and that can be expressed using the control input. Safety criteria, namely a bound on the maximum value of the safety indicators, is then derived. Kinetic and potential energy based criteria are used to constrain the dynamic behaviour of a KUKA LWR4 serial robot during the interaction with a human operator. The present paper is the continuation of our previously published work [7]. It is organised as follows. In Sect. 2, the proposed safety indicators and associated safety criteria are formulated. In Sect. 3, the controller is derived: task’s related objectives are formulated and the expression of the inequality constraints acting on the system is provided. In Sect. 4, an experimental scenario is introduced based on which the possibilities offered by the proposed controller are illustrated and discussed. Finally, Sect. 5 summarizes the contribution and provides an overview of the future work. 2 Interaction Forces and the Safety Indicators 2.1 Impact Force The generated impact force at collision can be written as a function of the dissipated energy and the shock absorption distance: [$$\begin{aligned} \int \_u F\_{impact} du = E\_{dissipated} = E\_{c}^{hum} + E\_{c}^{rob}, \end{aligned}$$] (1) [$$ F\_{impact} $$] is the generated impact force during the collision, u the shock absorption distance and [$$E\_{dissipated}$$] the dissipated energy which is equal to the sum of the kinetic energy [$$E\_{c}$$] of both the human operator and the robot. At a given time, very few assumptions can be made on the state of energy of the human operator. As a consequence, the retained safety indicator [$$S\_c$$] is robot-centred. [$$E\_{c}^{rob}$$] is directly related to the impact force and can be expressed using the actuation torque. It is therefore to be considered for the formulation of the first safety indicator: [$$\begin{aligned} \begin{aligned} S\_c = E\_{C}^{i,j} = \frac{1}{2} m({\varvec{q}})\_{i,j}^{eq} v\_{i,j}^2 \end{aligned} \end{aligned}$$] (2) With [$$1/m({\varvec{q}})\_{i,j}^{eq} = J({\varvec{q}})\_{C}^{i,j} M({\varvec{q}})^{-1} J({\varvec{q}})\_{C}^{{i,j}^T}$$]. [$$m({\varvec{q}})\_{i,j}^{eq}$$] is the equivalent mass of the robot segment i in the direction of obstacle j expressed in the cartesian space. [$$M({\varvec{q}})$$] is the joint space inertia matrix of the robot and [$${\varvec{q}}$$] its joint space configuration. [$$v\_{i,j} = J({\varvec{q}})\_{C}^{i,j} \dot{{\varvec{q}}}$$] is the relative velocity of the closest point C belonging to the robot segment i in the direction of obstacle¹ j. [$$J({\varvec{q}})\_{C}^{i,j}$$] is the Jacobian of the robot segment i expressed at point C and projected along the distance vector towards obstacle j. 2.2 Contact Force After the establishment of physical contact, contact forces are created as a consequence of the potential energy generated within the human-robot system. The force [$${\varvec{F}}\_{C|k}$$] driving the contact point at each time step k in the direction of the desired position (trajectory tracking task) is derived from the potential energy [$$E\_{p|k}$$]: [$$\begin{aligned} {\varvec{F}}\_{C|k} = -{\varvec{\nabla }} E\_{p|k} \end{aligned}$$] (3) Thus: [$$\begin{aligned} E\_{p|k} = -\int \_{{\varvec{x}}\_{C|k}}^{{\varvec{x}}\_{C}^{\*}} {\varvec{F}}\_{C|k} d{\varvec{x}} = -{\varvec{F}}\_{C|k} \left\| {\varvec{X}}\_{C|k} - {\varvec{X}}\_{C}^{\*} \right\| \_{C,\*} \end{aligned}$$] (4) With: [$$\begin{aligned} \begin{aligned} {\varvec{F}}\_{C|k} = m({\varvec{q}})\_{C,\*}^{eq} {\varvec{\ddot{X}}}\_{C|k}^{C,\*} \end{aligned} \end{aligned}$$] (5) [$$C,\*$$] represents the directing vector between the contact point C (on the considered segment i) and its desired position [$$\*$$]. [$${\varvec{\ddot{X}}}\_{C|k}^{C,\*} = \dot{J}({\varvec{q}})\_{C}^{C,\*} {\varvec{\dot{q}}}\_{|k} + J({\varvec{q}})\_{C}^{C,\*} {\varvec{\ddot{q}}}\_{|k}$$] is the cartesian acceleration of the contact C along the [$$C,\*$$] vector. [$$E\_{p|k}$$] is directly related to the contact forces and can be expressed using the actuation parameters (torque). It is therefore used for the formulation of the safety indicator during the physical contact phase. The retained safety indicator [$$S\_p$$] is robot centred and expressed as following: [$$\begin{aligned} S\_p = E\_{p|k} = -{\varvec{F}}\_{C|k} \left\| {\varvec{X}}\_{C|k} - {\varvec{X}}\_{C|k}^{\*}\right\| \_{C,\*} \end{aligned}$$] (6) Within the framework of this work, the only mobile body for which [$$S\_c$$] is considered is the robot’s end-effector. Indeed, it is the last segment of the fixed base serial robot (KUKA LWR4) that holds the practical load and consequently deploys the maximum energy (kinetic and potential). The only considered obstacle is the human operator. [$$\begin{aligned} \begin{aligned} S\_c = E\_{c}^{EE,O} = \frac{1}{2} m({\varvec{q}})\_{EE,O}^{eq} v\_{EE,O}^2 \end{aligned} \end{aligned}$$] (7) [$$\begin{aligned} \begin{aligned} S\_p = E\_{p|k}^{EE,\*} = - m({\varvec{q}})\_{EE,\*}^{eq} {\varvec{\ddot{X}}}\_{C|k}^{EE,\*} \left\| {\varvec{X}}\_{C|k} - {\varvec{X}}\_{C|k}^{\*} \right\| \_{EE,\*} \end{aligned} \end{aligned}$$] (8) 2.3 Safety Limit Values Pre Contact Establishment: For [$$S\_c$$], the safety criterion represents the maximum amount [$$E\_{c\_{limit}}$$] of kinetic energy allowed to be dissipated during a human-robot impact. To prevent over limiting the dynamic of the system, the idea is as following: When the human operator is far from the robot, the system can be as dynamic as possible to accomplish its main task (maximum kinetic energy [$$E\_{c\_{max}}$$] allowed). As the human operator starts walking towards the robot, a constraint [$$E\_{c\_{limit}}$$] depending on the distance between the robot’s end-effector and the person is placed on the kinetic energy of the machine. The system is forced into a safe dynamic behaviour. At this time, if any physical contact is engaged, the resulting impact force will be harmless (see Fig. 3(a)). [$$\begin{aligned} S\_c \le E\_{c\_{limit}} = E\_{c\_{safe}} + f(d) \end{aligned}$$] (9) f(d) is a weighting function depending on the distance d between the end-effector and the human operator. f is chosen to be linear and is written: [$$\begin{aligned} f(d) = K (d - d\_{safe}). \end{aligned}$$] (10) K represents the equivalent breaking force applied on the end-effector in the opposite direction of the obstacle. More details about this parameter can be found in [7]. Given the global objectives of this work, an average value of K ([$${>}0$$]) is considered all over the workspace of the robot. Post Contact Establishment: For [$$S\_p$$], the safety criterion represents the maximum amount [$$E\_{p\_{limit}}$$] of potential energy allowed to be stored at time step k within the human-robot system during a physical contact phase. The value of [$$E\_{p\_{limit}}$$] depends on several aspects: The desired degree of passivity of the robotic system, the maximum allowed contact force, if a spring-damper like behaviour is preferred and more importantly the degree of danger in case physical contact is lost. Indeed, when contact breaks, the stored potential energy [$$E\_{p\_{limit}}$$] is to be transformed into kinetic energy. In case of an other collision, the resulting impact force [$$F\_{impact}$$] should not cause any damage. Therefore, the maximum value acceptable for [$$E\_{p\_{limit}} = E\_{p\_{safe}}$$] is [$$E\_{c\_{safe}}$$]: [$$\begin{aligned} S\_p \le E\_{p\_{limit}} = E\_{p\_{safe}} \end{aligned}$$] (11) 3 Safe Dynamic Controller In this section a dynamic control strategy that ensures safety for the human operator is proposed. The objective is to compute the control torque [$$\varvec{\tau }$$] in order to perform a trajectory tracking task while respecting a number of constraints at every time-step: - Respect the introduced safety criteria to prevent harmful collisions and contact forces, - Respect the physical limits of the system. 3.1 Task Formulation In this work, a trajectory tracking performance is considered. A cartesian acceleration task is defined as an error between the expected acceleration [$${\varvec{\ddot{X}}}^c$$] and the real acceleration [$${\varvec{\ddot{X}}}$$] of the robot’s end-effector. [$${\varvec{\ddot{X}}} = J({\varvec{q}}) {\varvec{\ddot{q}}} + \dot{J}({\varvec{q}}) {\varvec{\dot{q}}}$$] (where [$$J({\varvec{q}})$$] is the Jacobian of the end-effector). The acceleration task function to be minimized is written: [$$\begin{aligned} {\varvec{g}}\left( \varvec{\tau },{\varvec{\ddot{X}}}^c\right) = {\varvec{\ddot{X}}}^c - \left( J({\varvec{q}}) M({\varvec{q}})^{-1} \left( \varvec{\tau } - {\varvec{b}}({\varvec{q}},{\varvec{\dot{q}}}) \right) + \dot{J}({\varvec{q}}) {\varvec{\dot{q}}}\right) . \end{aligned}$$] (12) [$${\varvec{b}}({\varvec{q}},{\varvec{\dot{q}}})$$] are the non linear terms of the equation of motion, namely gravity, Coriolis and centrifugal induced generalized forces. [$${\varvec{\ddot{X}}}^c$$] is computed with a PD controller and a feed-forward term in order to track a desired trajectory [$${\varvec{X}}(t)^\star $$]. 3.2 Constraints Formulation In addition to the linear constraint corresponding to the dynamic model: [$$\begin{aligned} M(q) {\varvec{\ddot{q}}}\_{|k} + {\varvec{b}}({\varvec{q}},{\varvec{\dot{q}}}) = \varvec{\tau }\_{|k} + \varvec{\tau }\_{ext} \end{aligned}$$] (13) the physical limitations of the robotic system must be accounted for when solving the control problem. The actuators limitations are considered at the following levels: [$${\varvec{q}}$$], [$${\varvec{\dot{q}}}$$] and [$${\varvec{\ddot{q}}}$$] and expressed as a function of the control variable [$$\varvec{\tau }\_{|k}$$]: [$$\begin{aligned} \left\{ \begin{array}{lcl} {\varvec{q}}\_{m} \le {\varvec{q}}\_{|k+1} = {\varvec{q}}\_{|k} + {\varvec{\dot{q}}}\_{|k} dt+ \frac{1}{2} {\varvec{\ddot{q}}}\_{|k} dt^2 \le {\varvec{q}}\_{M}, \\ {\varvec{\dot{q}}}\_{m} \le {\varvec{\dot{q}}}\_{|k+1} = {\varvec{\dot{q}}}\_{|k} + {\varvec{\ddot{q}}}\_{|k} dt \le {\varvec{\dot{q}}}\_{M}, \\ \varvec{\tau }\_{m} \le \varvec{\tau }\_{|k} \le \varvec{\tau }\_{M}, \end{array}\right. \end{aligned}$$] (14) [$${\varvec{q}}\_{|m,M}$$], [$${\varvec{\dot{q}}}\_{|m,M}$$] and [$$\varvec{\tau }\_{m,M}$$] are respectively the maximum/minimum feasible position, velocity and torques. To avoid high pick of torques and chattering phenomena [8], during experimentation, dt is fixed at [$$5 \cdot \delta t$$]. [$$\delta t$$] is the control time step. In an equivalent way, the safety indicators [$$S\_{c}$$] and [$$S\_{p}$$] can be expressed as a function of the control variable [$${\varvec{\ddot{q}}}\_{|k}$$]: [$$\begin{aligned} S\_c = E\_{c|k+1}^{EE,O} = \frac{1}{2} m({\varvec{q}})\_{EE,O}^{eq} v\_{EE,O|k+1}^2 \le E\_{c\_{limit}} = E\_{c\_{safe}} + f(d) \end{aligned}$$] (15) With [$$v\_{EE,O|k+1} = J({\varvec{q}})\_{C}^{EE,O} {\varvec{\dot{q}}}\_{|k+1}$$] and [$${\varvec{\dot{q}}}\_{|k+1} = {\varvec{\dot{q}}}\_{|k} + {\varvec{\ddot{q}}}\_{|k} \delta t$$]. [$$\begin{aligned} S\_{p} = E\_{p|k+1}^{EE,\alpha } = - m({\varvec{q}})\_{EE,\alpha }^{eq} {\varvec{\ddot{X}}}\_{C|k+1}^{EE,\alpha } \left\| {\varvec{X}}\_{C|k} - {\varvec{X}}\_{C|k}^{\*} \right\| \_{EE,\alpha } \le E\_{p\_{limit}} \end{aligned}$$] (16) [$$\alpha $$] represents the x, y and z directions in the cartesian space. [$${\varvec{\ddot{X}}}\_{C|k+1}^{EE,\alpha } = \dot{J}({\varvec{q}})\_{C}^{EE,\alpha } {\varvec{\dot{q}}}\_{|k+1} + J({\varvec{q}})\_{C}^{EE,\alpha } {\varvec{\ddot{q}}}\_{|k+1}$$] is the cartesian acceleration of the end-effector along the [$$\alpha $$] direction. 3.3 Controller Formulation The control torque is computed by minimizing the norm of the cartesian acceleration task function expressed in the following quadratic form: [] (17) Subject to (14) and (15) and (16). [$$\varvec{\tau }$$] and [$${\varvec{\ddot{q}}}$$] are the optimization variables. 4 Experimental Results In this section, the experimental setup of the KUKA LWR4 serial robot and the vision system used to detect the human operator are described. A test case scenario is presented and behaviours that can be induced using the presented controller and constraints are discussed. 4.1 Experimental Setup The distance between the robot and the human operator is computed using data from a set of 3 Kinects strategically placed around the workspace of the robot to avoid occlusions (see Fig. 1). RGB and depth images from each sensor are calibrated and the pose of each device in the robot’s base frame is computed. The robot and background are removed [9] from the depth images then the related pointclouds are down sampled and combined together. Finally the cluster of the human operator is extracted from the resulting pointcloud [10] and the minimum distance between the robot end-effector and the human operator is computed and published via a ROS topic. The controller described in Sect. 3 is implemented as a C++ OROCOS [11] component inside a generic software architecture developed at ISIR for robot manipulators [12]. The remote control PC runs a Xenomai [13] kernel with RTnet [14] to ensure minimum jitter in the real-time Ethernet communication. Finally, the communication with the Kuka Robot Controller (KRC) is performed via the Fast Research Interface (FRI) [15]. 4.2 Test Case Scenario As a main activity, the robot performs a repetitive movement where it tracks a desired position on a straight line between the points [] and [] in the cartesian space (see Fig. 1). The controller described by (17) is implemented only with the linear constraints on the physical limitations of the system (13) and (14). The QP problem is solved at every time-step [$$\delta t = 15$$] ms to compute the needed control torque. The QP is solved in real time using Gurobi, a commercial optimization software. [] Fig. 2. (a) Kinetic energy of the robot’s end-effector in the direction of the human operator; (b) Potential energy within the robot-human system during physical contact. (c) Top: position tracking error; Middle: constraint of the articular acceleration of the first joint; Bottom: articular velocity of the first joint. (d) Articular torques The maximum position tracking error in the cartesian space (see (c) in Fig. 2) is around 0.051 m, this is mainly due to the activation of the the articular velocity constraint (14) on the robot’s first joint. The maximum/minimum limits² on the articular velocity for the first joint are reached and violated. This is mainly caused by choosing [$$dt = 5 \cdot \delta t$$] in (14). The reason for this choice and further explanations can be found in [8]. According to Fig. 2, before the collision with the human operator³, the maximum reached velocity of the robot end-effector in the cartesian space is about 2 m/s and the maximum kinetic energy in the direction of the human operator is 2.8 J. At the collision instant, 2 J of kinetic energy are instantaneously dissipated to create the resulting impact force. After the establishment of physical contact, potential energy within the human-robot system increases to reach a maximum value of 14 J (see Fig. 2(b)). Consequently contact forces are created driving the blocked robot towards its desired position. The related torques can be seen in Fig. 2(d). Notice [$$\tau \_0 \simeq -18$$] N.m and [$$\tau \_2 = \simeq -16$$] N.m. Once the physical contact released, the previously charged potential energy is transformed into kinetic energy as fast as possible and the robot goes back to its normal behaviour. 4.3 Constraints on Kinetic and Potential Energy In this scenario⁴, the constraint (15) depending on the distance between the robot and the human operator is placed on the kinetic energy expressed at the robot’s end-effector in the direction of the human operator. After the establishment of physical contact, the constraint (16) on the amount of potential energy allowed to be generated within the human-robot system is also activated. The controller parameters are chosen as following: [$$E\_{safe} = 0.02$$] J, [$$K = 0.4$$] N.m, [$$d\_{safe} = 0.3$$] m and [$$d\_{max} = 7\,$$]m. From the kinetic energy profile in Fig. 3, the constraint on the kinetic energy of the robot is respected during the whole interaction’s time. At the collision instant, comparing to the previous scenario, only 0.02 J of kinetic energy are dissipated; This results into a smaller impact force. [] Fig. 3. (a) Evolution of the kinetic energy constraint depending on the distance d between the robot and the human operator. (b) Constrained Kinetic energy of the robot end-effector in the direction of the human operator; (c) Articular torques; (d) Potential energy within the human-robot system during physical contact. After the establishment of physical contact, the constraint on the amount of potential energy allowed to be stored within the human-robot system is also respected at every time step (see Fig. 3(d)). In this case [$$E\_{p\_{safe}}^{x} = 0.0$$] J, [$$E\_{p\_{safe}}^{y} = 0.0$$] J and [$$E\_{p\_{safe}}^{z} = 0.0$$] J. This results in smaller contact forces. The corresponding articular torques (see Fig. 3(c)) are much lower than during the contact phase of the previous scenario: [$$\tau \_0 \simeq -4$$] N.m and [$$\tau \_2 = \simeq -3$$] N.m. 5 Conclusion and Future Work Using the presented control framework and the introduced energy based criteria, the robot has been proven capable to adapt to the human operator so physical contact can be established without any damage. The impact force is reduced by constraining the kinetic energy of the robot and the contact force is modulated by constraining the amount of potential energy generated during physical contact. In the presented experiments we have been able to ensure the respect at every time step of the constraints on potential and kinetic energy. The only way we have been able to implement these constraints is by controlling the system at a time step [$$\delta t = 15$$] ms. Which gives the system sufficient time to brake and cope with these dynamic constraints. However, a time-step of 1 ms is still needed for better performances in the accomplishment of the trajectory tracking task. References 1. Bischoff, R., Kurth, J., Schreiber, G., Koeppe, R., Albu-Schäffer, A., Beyer, A., Eiberger, O., Haddadin, S., Stemmer, A., Grunwald, G., et al.: The kuka-dlr lightweight robot arm-a new reference platform for robotics research and manufacturing. In: 41st International Symposium on Robotics, pp. 1–8 (2010) 2. Ebert, D.M., Henrich, D.D.: Safe human-robot-cooperation: image-based collision detection for industrial robots. In: IEEE/RSJ International Conference On Intelligent Robots and Systems, pp. 1826–1831 (2002) 3. Haddadin, S., Albu-Schäffer, A., De Luca, A., Hirzinger, G.: Collision detection, reaction: a contribution to safe physical human-robot interaction. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3356–3363 (2008) 4. De Luca, A., Albu-Schäffer, A., Haddadin, S., Hirzinger, G.: Collision detection and safe reaction with the DLR-III lightweight manipulator arm. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1623–1630 (2006) 5. Heinzmann, J., Zelinsky, A.: Quantitative safety guarantees for physical human-robot interaction. Intl. J. Robot. Res. 22(7–8), 479–504 (2003)CrossRef 6. Haddadin, S., Khoury, A., Rokahr, T., Parusel, S., Burgkart, R., Bicchi, A., Albu-Schäffer, A.: A truly safely moving robot has to know what injury it may cause. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5406–5413 (2012) 7. Meguenani, A., Padois, V., Bidaud, P.: Control of robots sharing their workspace with humans: an energetic approach to safety. In: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4678–4684. IEEE (2015) 8. Park, K.C., Chang, P.H., Kim, S.H.: The enhanced compact QP method for redundant manipulators using practical inequality constraints. In: 1998 IEEE International Conference on Robotics and Automation, 1998, Proceedings, vol. 1, pp. 107–114. IEEE (1998) 9. KaewTraKulPong, P., Bowden, R.: An improved adaptive background mixture model for real-time tracking with shadow detection. In: Video-Based Surveillance Systems, pp. 135–144. Springer, New York (2002) 10. Rusu, R.B., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011 11. Soetens, P.: RTT: Real-Time Toolkit. http://​www.​orocos.​org/​rtt 12. Hoarau, I., Da Silva, J., Padois, V.: rtt-kuka-lwr: control software architecture for lightweight robots. https://​www.​github.​com/​kuka-isir/​rtt\_​lwr 13. Chanteperdrix, G.: Xenomai. https://​xenomai.​org 14. Rtnet: Hard real-time networking for real-time linux. http://​rtnet.​org. (M. et al.) 15. Schreiber, G., Stemmer, A., Bischoff, R.: The fast research interface for the kuka lightweight robot. In: IEEE International Conference on Robotics and Automation (ICRA), Citeseer (2010) 16. Kuka: Hr no constraints. http://​pages.​isir.​upmc.​fr/​~padois/​website/​fichiers/​videos/​Kuka\_​Human\_​interaction\_​no\_​constraints.​mp4. (M. et al.) 17. Kuka: Hr with constraints. http://​pages.​isir.​upmc.​fr/​~padois/​website/​fichiers/​videos/​Kuka\_​Human\_​interaction\_​constraints\_​on\_​Ec\_​and\_​Ep.​mp4. (M. et al.) Footnotes 1 All along the paper, “obstacle” is used as a generic term for any external element of the environment, e.g. a human operator.   2 The maximum/minimum limits on the articular velocity of the first joint are fixed in the QP at lower values than the real capacities of the robot.   3 see video in [16].   4 see video in [17].   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_71 Psychological Evaluation on Influence of Appearance and Synchronizing Operation of Android Robot Kaori Tanaka¹, Masahiro Yoshikawa², Yujin Wakita³, Yoshio Matsumoto³   and Kazuhito Yokoi⁴ (1) Denso Corporation, Kariya, Japan (2) Department of Robotics, Osaka Institute of Technology, Osaka, Japan (3) Robot Innovation Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan (4) Intelligent Systems Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan     Yoshio Matsumoto Email: yoshio.matsumoto@aist.go.jp Abstract This paper describes the influence of the appearance and synchronizing operation of android to the extension of “personal space” from an operator to the android. We proposed a hypothesis that the extension of personal space from an operator to the android is affected by their agency and sympathy to the android. To verify this hypothesis, we conducted an experiment (1) in which an experimenter approached the android robots and unhuman-like robot to the intimate distance after male/female participant became familiar with the synchronized operation. The SCR (Skin Conductance Response) signal of the operator was measured as an indication of psychological or physiological arousal of the operator. The fluctuation of SCR was observed as reactions to stimuli of another person closely approaching the same gender android that is controlled to be synchronizing with the operator. As a result, the SCR in the android synchronous motion condition was significantly larger than the android/unhuman-like robot no motion condition. 1 Introduction 1.1 Background Communication robots that coexist with human are known to give various mental or psychological effects on human partners, which should be considered to achieve smooth human-robot communication. In 1970, Mori proposed the “uncanny valley” phenomenon, which assumes that when appearance of a robot is getting similar to human, familiarity of a robot is getting higher in general, but it drastically decreases at some point very close to human [1]. Since then, human perception on humanoid robots is actively investigated. Kamide et al. reported that both the appearance and the behavior of humanoid robots determine the degree of likability by humans [2]. Nakano et al. reported that the impression of robot can be improved by adding head and arm movements [3]. It is also reported that human-like behaviors such as smiling [4] and eye-contact [5] could improve the impression of communication robots. Hofree et al. reported that human subjects unconsciously imitates the facial expression of the humanoid “Einstein” which can be regarded as the sympathy of human toward the robot [6]. Recently, brain activities measurement such as fMRI are also utilized to analyze how human perceives robots [7, 8]. We have developed a highly human-like android robot which can be operated in synchronization with human motions. In many field experiments and demonstrations, we found that the operator often feels as if he/she is approached when other people approach the android [9]. In order to investigate the phenomenon, we conducted an experiment where human subject operated the head motion of the android by synchronizing it to human head motion. Then, it was proved that the subject had mental response when experimenter approached the android by physiological signal (SCR) and subjective measurement (questionnaires). The response was stronger when the android was synchronized with subject than the response when the android was not synchronized. In the previous experiment, we hypothesized that the “personal space” surrounding the subject is extended to the area surrounding the android by synchronous motion. Therefore we only utilized the android with male appearance, and compared the response in synchronous and asynchronous condition. In this paper, further investigation on the effect of the appearance is described to indicate how the similarity of the appearance to the human subject is important in addition to the synchronizing motion. 1.2 Hypothesis In this paper, we hypothesize that the operator’s personal space could also be extended to the area around the android by the operation in synchronization with the subject, and the factors which induce the extension are as follows: 1. (1) Sympathy caused by human-like appearance of the android robot,   2. (2) Sense of agency caused by synchronizing motion of the android robot with the operator.   The hypothesized mechanism for the extension is shown in Fig. 1. [] Fig. 1. Hypothesis The personal space is the area surrounding individuals into which intrusion by others causes discomfort and strangeness. Hall classified it into four areas depending on the size: public, social, personal, and intimate [10]. Characteristics of the personal space have been assessed based on subjective reporting and physiological response such as skin conductance response (SCR) [11]. Sympathy is the ability to understand and share another person’s experiences and emotions [12]. Mirror neurons are regarded to play important role in the perception of sympathy, which fires both when a person acts and when the person observes the same action performed by another person [13]. Therefore, we hypothesize that android robot which resemble human appearance can activate mirror neuron which leads to the sympathy. Sense of agency refers to the subjective awareness that one is initiating, executing, and controlling one’s own volitional actions in the world. It is the pre-reflective awareness or implicit sense that it is I who is executing bodily movements or thinking thoughts. The sense of agency is tightly integrated with one’s “sense of ownership”, which is the pre-reflective awareness or implicit sense that one is the owner of an action, movement or thought. Miyazaki et al. reported that one can recognize oneself in the mirror due to the sense of agency [14]. Karin et al. reported showed that the combination of first-person perspective and sense of agency (imagination that the arms were their own) can induce “Rubber Hand Illusion” [15]. Therefore we hypothesize the synchronous motion of the android can induce the sense of agency. 2 Design of Experiments for Psychological Evaluation 2.1 Aim of Experiments The aim of the experiments is to verify the psychological effects of the appearance and motion of the robots on the operator. We conducted the experiment in which an experimenter approached the robot (android etc.) to Hall’s intimate distance (30 cm) [16] after participant became familiar with the operation. By analyzing the subjective reporting and the SCR as a physiological indicator, the change of participant’s personal space was evaluated. The subjective reporting and the physiological response are measured when an experimenter approaches the android without the visual stimulus of the touch or the injection. Since we focus on the emotional and social responses, the scheme of the personal space is considered appropriate in our study. As described in previous section, we hypothesize that android robot which resemble human appearance can activate mirror neuron which leads to the sympathy. In order to verify this hypothesis, we compare the effects utilizing both android robots with male/female appearance and another type of humanoid robot. We also hypothesize the synchronous motion of the android can induce the sense of agency. In order to verify this hypothesis, the effects of synchronous motion and asynchronous (random) motion of the robot are investigated. 2.2 Robots Utilized for Experiments In order to verify this hypothesis, we compare the effects utilizing two types of humanoids: android robots “Actrod-F” with male/female appearance and an unhuman-like robot “HIRO” with mechanical appearance. Figure 2 (left) shows the android robots “Actroid-F” utilized for human-robot interaction studies and cognitive sciences [17]. Their appearance highly resembles human. The faces are made of soft silicon rubber by copying a real human face and has 7 degrees of freedom (DOF) that allows natural facial expressions such as smile, surprise, and anger. Three DOF, i.e. roll, pitch, yaw motion of the head can be controlled. All of the joints are controlled with pneumatic actuators, which leads to the robustness of the robot without the necessity of maintenance for years. The motions of the android can be synchronized with that of the target person using the operation system including a webcam and faceAPI software [18] that recognizes human faces shown in Fig. 3. The robot’s motion (roll, pitch, yaw) synchronizing with human operator was measured by motion capture system, delay of android’s motion was about 0.4 [s]. [] Fig. 2. Actroid-F (left) and HIRO (right) [] Fig. 3. Experimental setup Figure 2 (right) shows the humanoid robot “HIRO” developed by Kawada Industry [19]. It is an academic version of the humanoid “NEXTAGE”, and serves as a platform for performing manipulation tasks for handling and assembling parts in industrial domains. It has dual-arms (6 DOF x 2), pan-tilt camera head (2 DOF) with stereo camera pair, and a torso joint (1 DOF) for rotation. The configuration of the robot has a certain level of similarity with human, but its appearance still remains far from that of human. As HIRO has only pan and tilt for head motion, synchronizing motion is limited compared with the android robots. 2.3 Experimental Procedure Figure 3 shows an example of experimental setup. The participant sit face-to-face with the robot while wearing electrodes on fingers to measure the SCR. A webcam for capturing the face of the subject is placed between the subject and the robot. In each experiment, one of following three robots is utilized: - Android of same gender with the subject - Android of opposite gender of the subject - Unhuman-like robot In case of android, blinking motion is executed automatically to enhance natural appearance. The motion of the head of each robot is controlled in following manners depending on the experimental conditions: - Staying still in “Without Motion” condition - Executing natural idling motion randomly in “Asynchronous With Motion” condition - Moving synchronizing with the subject in “Synchronous” condition In Synchronous condition, the subject operates the head of the robot for about 2 min using the operation system in order for the participant to become sufficiently familiar with the operation. Then the experimenter makes interaction with the subject by asking simple questions, and the subject answers it with head motion of the robot. In each condition, a female experimenter approaches the robot from 150 cm to 30 cm from the right side while the robot keeps performing motion as described above. The duration of closest approach (30 cm) was about 2 s. Finally, the experimenter directly approaches the participant sitting in front of the robot to the distance of 30 cm for reference. The subject performs each condition in random order. In each condition, the experimenter approaches five times. 2.4 Evaluation Methods In order to make subjective assessment, the subject answers following questionnaire after performing each condition in the experiment: - Q1: Did you want to turn the face of the robot away from the experimenter’s face, when she approached the robot? - Q2: Did you feel embarrassed when she approached the robot? - Q3: Did you feel discomfort when she approached the robot? - Q4: Did you feel strangeness when she approached the robot? - Q5: Did you feel nervous when she approached the robot? - Q6: Did you feel as if the android’s body was your body? From Q1 to Q4 are questions about the personal space, and Q5 is a question about the sense of agency. Participant answered in 7-point Likert scale from −3 (I disagree very strongly) to +3 (I agree very strongly) for each question. For objective evaluation, SCR (Skin Conductance Response) is utilized. The SCR is the electrical conductance of the skin, which varies with its moisture level. Because the sweat glands are controlled by the sympathetic nervous system, the SCR is utilized as a measure of emotional and sympathetic responses [20]. The participant sitting face-to-face with the robot wears electrodes on a left index finger and a middle finger to measure the SCR as shown in Fig. 4. The SCR is sampled at 10 Hz by an A/D converter synchronized with event signals of the approach by the experimenter. Because the rise of the SCR generally delays for a few seconds after the stimulus, the peak value is identified within the period from the onset of approach to 2 s after the point of getting away from the robot as shown in Fig. 5. Then the baseline value is calculated as an average of 1 s before the onset of the approach. The differences between the baseline and the peak are calculated as ΔSCRs, which were averaged across five trials. [] Fig. 4. Electrodes of SCR [] Fig. 5. SCR sampling 3 Experiment 3.1 Experimental Subjects We conducted two experiments as follows: 1. (1) Comparison of effects between “same gender” and “opposite gender” androids. 14 male subjects (average: 21.1 years old) and 11 female subject (average: 20.1 years old) participated in the experiment.   2. (2) Comparison of effects between “human-like” and “unhuman-like” robots. 13 male subjects (average: 21.0 years old) participated in the experiment.   Both of the experimental protocols were approved by ethical review board of AIST and Tsukuba University. Informed consent was obtained from all participants before participation. 3.2 Experimental Results The result of SCR measurement in the experiment (1) is shown in Fig. 6. As there was a significant interaction between factors (F(4,96) = 8.11, p < 0.001) in ANNOVA, post-hoc analysis was conducted using Ryan’s method. In Synchronous (same gender) condition, measured SCR was significantly larger than other three Synchronous and Asynchronous conditions. [] Fig. 6. Result of SCR measurement in experiment (1) Figure 7 indicates the result of subjective reporting by questionnaire in experiment (1). Synchronous motion (same gender) condition had significantly higher value compared with Asynchronous without motion condition in Q1. Synchronous motion (same gender) condition had significantly higher value compared with all other conditions in Q6. There was also significant difference between Asynchronous motion (same gender) condition and Synchronous motion (opposite gender) condition. [] Fig. 7. Result of questionnaire in experiment (1) Figures 8 and 9 indicates the result of experiment (2) where the difference of android and “unhuman-like” humanoid robot is investigated. Figure 8 shows the SCR measurements. As there was a significant interaction between factors (F(4,32) = 2.81, p < 0.10) in ANNOVA, post-hoc analysis was conducted using Ryan’s method. It was found that both Asynchronous and Synchronous conditions with Unhuman-like Robot had lower SCR reaction compared with Synchronous condition with android. [] Fig. 8. SCR measurement in experiment (2) [] Fig. 9. Result of Questionnaire in experiment (2) Figure 9 indicates the result of questionnaire in experiment (2). All questions except Q4 had a significant interaction between factors (Q1:F(4,32) = 6.54, p < 0.10, Q2:F(4,32) = 21.03, p < 0.001, Q3:F(4,32) = 5.75, p < 0.005, Q5:F(4,32) = 15.03, p < 0.001, Q6:F(3,24) = 10.17, p < 0.001) in ANNOVA, post-hoc analysis was conducted using Ryan’s method. Synchronous Android condition had the highest value in Q5 (nervous), Q6 (body extension). Synchronous Android also had significantly higher value in Q2 (embarrassment) compared with conditions with unhuman-like humanoid robot. 4 Discussion and Conclusion The results of experiment (1) and (2) indicated that both human-like appearance of the robot and synchronous motion of the robot are important factors for inducing psychological effect. However, in order to explain the experimental results better, it is necessary to add the factor of “body extension” to the hypothesis as shown in Fig. 10, because significant differences were found among conditions in Q6 in both experiments. The body extension refers to the perception that external object such as tools to be a part of one’s body. It is different from “personal space” in a sense that personal space is a social phenomenon and body extension is not. However it is reported that the body extension can also induce psychological reaction such as SCR. [] Fig. 10. Revised model of psychological effect of synchronous human-like robot We also analyzed the experimental results for male subjects and female subjects separately. Then it was shown that SCR reaction was the highest in Synchronous Android (same gender) condition for female subjects, and the main factor was body extension. For male subjects, Synchronous Android (same gender) also had the highest SCR reaction, but the extension of personal space was confirmed to occur. These mechanism is illustrated as red and blue lines in Fig. 10. Finally, the SCR reaction obtained in both experiments are shown in Fig. 11 to clarify the effects of appearance and motion. The size of the solid circle indicates the size of response, and the outer circle indicates the standard error. It can clearly be seen that both appearance and synchronous motion strengthen the psychological effect on the operator. [] Fig. 11. SCR reaction obtained in experiments (Psychological effects of appearance and motion) Acknowledgement This research was supported by Grant-in-Aid for Scientific Research on Innovative Areas “Human-Robot Symbiosis” (KAKEN No. 21118002) from the Ministry of Education, Culture, Sports, Science and Technology, Japan. References 1. Mori, M.: The uncanny valley. Energy 7(4), 33–35 (1970) 2. Kamide, H., et al.: Development of a scale of perception to humanoid robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5830–5835 (2010) 3. Nakano, Y., et al.: Towards a model of face-to-face grounding. In: Proceedings of the 41st Annual Meeting of the Assoc for Comp Linguistics, pp. 553–561 (2003) 4. Breazeal, C.L.: Emotion and sociable humanoid robots. Int. J. Hum. Comput. Stud. 59(1), 119–155 (2003)CrossRef 5. Liu, C., et al.: Generation of nodding, head tilting and eye gazing for human-robot dialogue interaction. In: Proceedings of ACM/IEEE International Conference on Human–Robot Interaction, pp. 285–292 (2012) 6. Hofree, G., et al.: Bridging the mechanical and the human mind: spontaneous mimicry of a physically present android. PLoS ONE 9(7), e99934 (2014)CrossRef 7. Kranch, S., et al.: Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE 3(7), e2597 (2008)CrossRef 8. Takahashi, H., et al.: Different impressions of other agents obtained through social interaction uniquely modulate dorsal and ventral pathway activities in the social human brain. Cortex 58, 289–300 (2014)CrossRef 9. Tanaka, K., et al.: Change of personal space induced by operation of android robot synchronized with operator. In: Proceedings of the IEEE/SICE International Symposium on System Integration (SII 2013), pp. 346–351 (2013) 10. Sommer, R.: Studies in personal space. Sociometry 22, 247–260 (1959)CrossRef 11. Pedersen, D.M.: Development of a personal space measure. Psychol. Rep. 32, 527–535 (1973)CrossRef 12. Cristina, G.L., et al.: Towards a neuroscience of empathy: ontogeny, phylogeny, brain mechanisms. Psychopathol. Neurosci. 37, 1537–1548 (2013) 13. di Pellegrino, G., et al.: Understanding motor events: a neurophysiological study. Exp. Brain Res. 91(1), 176–180 (1992)MathSciNetCrossRef 14. Miyazaki, M., et al.: The sense of agency: a key factor in the emergence of self-recognition. Congn. Stud. 18(1), 9–28 (2011). (in Japanese) 15. Hägni, K., et al.: Observing virtual arms that you imagine are yours increases the galvanic skin response to an unexpected threat. PLoS ONE 3(8), e3082 (2008)CrossRef 16. Hall, E.D., et al.: The Hidden Dimension. Anchor Books (1966) 17. Yoshikawa, M., et al.: Development of an android robot for psychological support in medical and welfare fields. In: Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO 2011), pp. 2378–2383 (2011) 18. faceAPI, Seeing Machines (2011). http://​www.​seeingmachines.​com 19. HIRO, Kawada Industries (2011). http://​global.​kawada.​jp/​mechatronics/​hiro.​html 20. Hein, G., et al.: Skin conductance response to the pain of others predicts later costly helping. PLoS ONE 6(8), e22759 (2011)CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_72 Collective Cognition and Sensing in Robotic Swarms via an Emergent Group-Mind Michael Otte¹   (1) Aerospace Engineering Sciences Department, University of Colorado at Boulder, Boulder, USA     Michael Otte Email: ottemw@gmail.com Abstract Algorithms for robotic swarms often involve programming each robot with simple rules that cause complex group behavior to emerge out of many individual interactions. We study an algorithm with emergent behavior that transforms a robotic swarm into a single unified computational meta-entity that can be programmed at runtime. In particular, a swarm-spanning artificial neural network emerges as wireless neural links between robots self-organize. The resulting artificial group-mind is trained to differentiate between spatially heterogeneous light patterns it observes by using the swarm’s distributed light sensors like cells in a retina. It then orchestrates different coordinated heterogeneous swarm responses depending on which pattern it observes. Experiments on real robot swarms containing up to 316 robots demonstrate that this enables collective decision making based on distributed sensor data, and facilitates human-swarm interaction. Keywords Robotic swarmGroup mindNeural networkEmergent behaviorCoordinationDistributed sensingMulti-agent system 1 Introduction, Motivation, Related Work The human brain is composed of roughly [$$10^{15}$$] connections between [$$10^{11}$$] neurons that self-assemble during development as cells respond to local stimuli [1]. While this feat is impressive in the physical sense, it is also compelling from a computational point-of-view. Human and animal societies containing upwards of [$$10^9$$] individuals also spontaneously form, have collective intelligence [2], and exert globe-changing collective behaviors that result as the product of countless local interactions [3]. In many ways societies are meta-organisms [4] in which individuals take the roll of cells and communication substitutes for neural connectivity. Science fiction authors [5–7] have taken this analogy further — imagining that a group of individuals linked by neural connections might form a collective “group-mind” defined by shared awareness, pooled computational resources, etc. Nonfictional artificial neural networks (ANNs) were inspired from biology and developed in the 1950s [8], and have proved capable of learning complex tasks across many domains [9–11]. We combine ANNs with swarm robotics and wireless communication to produce an artificial group-mind in which a swarm of autonomous robots spontaneously self-assembles into a single computational entity. The resulting entity is trained to distinguish between various heterogeneous patterns in the global environment — using the swarm’s distributed sensors much like the retina cells in an eye — and then coordinates a collective yet heterogeneous swarm response based on the specific pattern it observes. [] Fig. 1. Emergent group-mind neural network. (A) Each robot maintains a slice of neurons (depicted along the vertical axis) and forms neural connections with its neighbors. (B) The Kilobot robot that we use. Swarm roboticists [12, 13] often design algorithms that leverage emergence [14, 15], the idea that simple behaviors performed in concert by a group of interacting individuals can produce complex global behaviors like bird flocking [13, 16], termite nest construction [17], food foraging [18], and collective transport [19, 20]. In robotic self-assembly the emergent product is a robotic mega-structure created as robots physically arrange themselves within the environment [21]. Our emergent product is the artificial group-mind itself — a computational entity that can be programmed by a human user at run-time to perform explicit complex tasks. “Mind vs. body” is an appropriate metaphor for the distinction between an artificial group-mind and a self-assembling robot; the distinction vs. other emergent swarm behaviors like foraging is akin to that between the nervous system and other distributed bodily systems, e.g., the immune system. Although a robotic swarm is a natural host for an artificial group-mind, intelligent materials [22, 23] could also be used. 2 Technical Approach The swarm consists of 3.3 cm Kilobots [24] (see Fig. 1). Kilobots locomote via vibration, communicate wirelessly using infrared light (range 10 cm), have AtMega328 microprocessors (8 Mhz 32 K memory), visual light-intensity sensors, and a multi-color light emitting diode (LED). A digital light projector mounted above the environment controls environmental light intensity patterns by projecting 50 by 50 pixel grayscale images onto the swarm. Light projections are used both for human-to-swarm communication and also to create various global light patterns which the swarm is trained to differentiate. [] Fig. 2. The group-mind emerges as robots form neural connections with communication neighbors. It is trained to differentiate between global environmental patterns (represented here by the flags of Japan and France) detected across the swarm’s collective sensors. The output behavior of the swarm (represented by shapes ‘E’ vs. ‘W’) depends on the pattern (flag) that is observed. Use of the artificial group-mind is depicted in Fig. 2; pseudocode appears in the Appendix. Robots receive identical individual programming and are distributed in the environment a priori. Although distribution could be accomplished autonomously, see e.g., [24], we manually place robots in the environment to conserve battery life and memory space. The training algorithm assumes that no robot has two or more neighbors with the same identification number. A distributed algorithm is used to ensure this happens with probability 1. The swarm’s light sensors are calibrated to correct for imperfections in sensitivity and systematic differences in brightness caused by projector distance. Two sets of projected images are used to upload data to the swarm. The first is a set of raw environmental feature patterns. The second is a corresponding set of swarm behavioral response patterns. In general, images in both sets may be spatially heterogeneous. Behaviors are encoded via a predefined mapping from normalized grayscale values, and may include simple or complex functionality such as “display red LED” or “move toward brighter light.” The group-mind learns to differentiate between the environmental feature patterns during a training phase. After the training phase, the swarm will perform the corresponding behavioral response for whatever feature pattern the group-mind currently sees in the environment. The artificial group-mind is a swarm-spanning feed-forward ANN that emerges as robots form ongoing wireless neural communications with their neighbors. It uses a slice-parallel implementation of the backpropogation ANN training algorithm [25] that we have specially modified for use with unreliable wireless communication. Conventional ANNs assume reliable communication, which wireless is not. Instead, we temporarily pause training on any robot that becomes more than B training iterations out-of-sync with any of its neighbors, until those neighbors reestablish communication and catch up. [$${0<B<\infty }$$] is a predefined constant and [$$B=100$$] in experiments. Each robots is responsible for maintaining a slice of L neurons within the group-mind (Fig. 1), where L is defined a priori. [$$L=2$$] in experiments, i.e., there is one hidden layer. Connections are established from neurons at layer [$$\ell $$] on a robot i to those at layer [$$\ell +1$$] on each of its neighbors [$$j \in N\_i$$] for [$${0 \le \ell < L}$$], where [$$N\_i$$] is the neighbor set of i, and i is considered a neighbor of itself ([$$i \in N\_i$$]). The signal values at layer 0 are set by the real-time environmental sensor (light sensor) readings. The output signal from layer L on a particular robot i is used to determine i’s behavior at run-time. As with standard ANNs, each neuron’s output is calculated by performing a weighted sum over incoming signals and then passing the result through a step-like function (the hyperbolic tangent from [26] is used in our experiments). Training the group-mind via the backpropogation training algorithm involves adjusting link weights to improve performance vs. the training example set. This involves sending update messages in the backward direction. Updates from the final layer L contain the signal error for each training example, and those from internal layers [$$\ell < L$$] encode the cumulative error at L ascribed to local error at [$$\ell $$]. Given updates from [$$N\_i$$], a node i at layer [$$\ell $$] can adjust the weights assigns incoming neural signals such that overall ANN performance improves. This results in a form of gradient descent. In practice, a Robot stops training once its local error has fallen below a predefined threshold (5%). [] Fig. 3. Experiment without movement. The light pattern tuning set (A) and test set (B) used for experiments with 303 and 316 robot swarms, respectively. Top rows are raw light intensity feature patterns and bottom rows the corresponding output behaviors (represented by color) the group-mind must learn to perform in response. Feature patterns projected onto the swarm (C,D) for tuning and test cases, and the behavior that was actually performed as a result. Classification accuracy vs. different input patterns (E). 3 Experiments We experiment with swarms of varying size and two different classes of swarm response behavior (stationary vs. moving robot positions). Tuning parameters are chosen using a tuning data set that is different from the test data set. Figure 2 shows experiments in which the collective responses involve displaying 2-D color images across the swarm’s distributed LED array. Robot behaviors include display “red LED,” “blue LED,” and “off LED.” Thus, the swarm is able to display a yin-yang, wifi symbol, etc., by having different robots perform different behaviors. In these experiments, the swarm may safely use its group-mind as it is being trained, allowing direct assessment of the group-mind’s training status via its improving collective behavior. Figure 3 depicts a set of experiments in which the response behaviors require physical movement in order to create one of two different shapes (blue smiley face or red frown face) depending on which environmental feature pattern is observed at run-time (peace and biohazard symbols, respectively). Robot behaviors include: “Random-Search,” “Red-Attract,” “Blue-Attract,” and “Continue-Training.” Red-Attract and Blue-Attract cause a robot to broadcast “Attract” messages while remaining stationary and displaying red or blue LEDs, respectively. A robot performing Random-Search will move around the environment at random until receiving an “Attract” message sent from closer than 5 cm, in which case it halts and displays a white LED. Physical shapes emerges as Random-Search robots move from their original positions to fill the space around attracting robots (or leave the environment). [] Fig. 4. Experiments with movement. Light intensity pattern test set (A) for experiments with movement. (B) Results (top to bottom): the swarm’s training status when movement started, classification accuracy vs. the raw light intensity pattern that initiated movement, and the breakdown of the swarm’s behavior at experiment end. (C-G) Columns correspond to experiments. (C) Training data (light intensity and output behavior pattern, top and bottom) for the behavior that was eventually chosen. (D) Training status when movement started. (E) real-time light intensity feature data and resulting output behavior (top, top and bottom). (F) Swarm position at experiment end. (G) Swarm behavior at experiment end. The “Continue-Training” behavior causes a robot to continue training until its training error has fallen below 5%, and then to display a yellow LED. By training the group-mind to “Continue-Training” in response to a (uniform medium-gray) pattern displayed during training, the overall group-mind training status can be evaluated by observing the proportion of the swarm displaying yellow LEDs. Physical movement breaks neighborhood connectivity which causes the group-mind to dissolve. Thus, the group-mind must coordinate an organized deliquesce prior to the start of movement. Each robot i continually evaluates the group-mind’s calculation of i’s output behavior based on the real-time distributed sensor data. If this behavior is not “Continue-Training” for more than a predefined length of time (30 s), then robot i begins performing the prescribed behavior while broadcasting messages indicating the pattern detected. Any robot [$$j\ne i$$] in a poorly trained subset of the group-mind can calculate its own behavior by combining the data from i’s message with its own behavior map. j then performs the appropriate behavior and re-broadcasts the message from i. 4 Main Experimental Insights Drained batteries were an unexpected difficulty. This problem could be minimized by replacing batteries prior to an experiment and/or modifying the hardware or output behaviors to be more power efficient. However, one reason for using swarms is their robustness vs. partial loss. In experiments where color LED images were the desired output, a dead battery simply meant that a particular robot’s LED was dark. In experiments with movement, the desired output shape was discernible despite moderate (up to 26%) loss. Algorithmically, if a robot looses power during the training process, then its neural signals freeze from a neighbor’s point-of-view. If such a frozen signal is detrimental to a neighbor’s neural performance then it will be weighted less-and-less over time. However, because robot i to pauses training after becoming B iterations out-of-sync with j, power loss on one robot can potentially pause training across the entire swarm. Although a full-scale failure was not observed in the experiments, this is clearly a weakness of our algorithm. Neighbor pruning could potentially alleviate this problem. For instance, perpetually uncommunicative neurons could have their last known signals treated as fixed input by their neighbors and then subsequently ignored. Although this would technically break the theoretical convergence guarantees of our modified backpropogation training algorithm, these guarantees are also broken whenever a robot becomes permanently uncommunicative (and thus forfeit in the event of power loss anyway). 5 Results We have performed a variety of experiments on real robot swarms containing up to 316 robots. These provide proof-of-concept that an artificial group-mind can emerge as the result of distributed computation across a robotic swarm, is a useful tool for human-swarm interaction, and enables fine-grained heterogeneous swarm behavior to be programmed at run-time and at a high level by a human user. In particular, the group-mind is capable of detecting and classifying heterogeneous feature patterns across the global environment, and orchestrating a collective heterogeneous swarm response. The simple behaviors exhibited in the experiments could easily be replaced by more sophisticated behaviors with no change to the training and decision making algorithms. Other environmental features (chemical, temperature, acoustic, etc.) could easily be used in place of light intensity. The ad-hoc process in which neural connections form in the group-mind is a departure from traditional ANNs, and echoes similar emergent neural linking in the animal brain. Acknowledgments This work would not have been possible without Michael Rubenstein, who designed the Kilobot platform, taught the author how to use it, and provided invaluable feedback on this work. The author is grateful for the knowledge, resources, encouragement, and advice provided by Radhika Nagpal and Melvin Gauci. The author are also grateful to Derek Kingston for providing the time, space, and freedom to pursue this problem. This work was funded by the Control Science Center of Excellence at the Air Force Research Laboratory (CSCE AFRL), the National Science Foundation (NSF) grant IIP-1161029, and the Center for Unmanned Aircraft Systems. This work was performed while Michael Otte was “in residence” at CSCE AFRL. Appendix: High-level Pseudo Code Each robot in the swarm runs identical code. Two different “main” procedures are presented. The first is for situations in which the output behavior of the swarm does not involve movement or other actions that will break the group mind’s network connectivity (Algorithm 2). The second is for situations in which the output behavior is expected to break connectivity, and so the group mind must organize an orderly dissolution back to a non-group-mind swarm (Algorithm 3). In addition to the main thread, each robot runs a separate message broadcast thread at approximately 2 Hz (Algorithm 4), and has a callback function to receive incoming messages (Algorithm 5), respectively. Global data is accessible across all threads and functions. [] [] The start-up procedure appears in Algorithm 1 and corresponds to the steps in Fig. 2 between “Local ID Agreement” and “Data Upload.” Each robot uses a state machine that is initialized to state [$$\mathrm {NOT\\_YET\\_TRAINING}$$] (line 1). A Boolean value [$$done\\_training$$] is also used to track when training has resulted in an acceptable level of accuracy (on this robot). The battery charge is used to seed a pseudo-random number generator so that different pseudo-random number sequences will be generated on each robot with high probability. A distributed algorithm is used to ensure that neighboring robots have unique randomly determined IDs (line 4). Light sensors are calibrated (line 5). Neighbors are discovered and outgoing wireless links to their neurons are created and initialized with random weights (line 6). Data is uploaded to the swarm from a human user via visual light projection following a predefined procedure (line 7). State [$$\mathrm {TRAIN}$$] indicates the start-up phase has ended (line 8). [] The main thread for non-movement cases appears in Algorithm 2. All signals sent along neural connections are tagged with the number of training iterations this robot has completed. The function [$$\mathrm {out\\_of\\_sync}()$$] returns [$$\mathbf {true}$$] whenever this robot has gotten too many training iterations ahead of its neighbors (100 in our experiments). The backpropogation training algorithm is run one iteration at a time (line 4) — but only if the training error needs improvement and this robot is not out-of-sync with its neighbors (line 3). A robot stops training once its local error has fallen below [$$5\%$$] (lines 5–6). This robot uses the subroutine [$$\mathrm {use\\_group\\_mind}(\mathrm {sample\\_light()})$$] to both provide its current light sensor reading to the group mind, and to learn the group mind’s prediction of the overall swarm behavior [$$\tau $$] that should be performed (line 7). The single robot behavior [$$behaviour$$] this robot performs as part of [$$\tau $$] is also returned, and determined within [$$\mathrm {use\\_group\\_mind}(\mathrm {sample\\_light()})$$] by querying a local look-up table with the value of [$$\tau $$]. The look-up table is populated with the local mapping from [$$\tau $$] to [$$behaviour$$] during the data upload portion of the start-up phase. [] [] The main thread used in cases involving movement appears in Algorithm 3. Differences vs. Algorithm 2 (no movement) appear on lines 3 and 8–15. Movement destroys the group mind; thus, movement should only start once the group-mind is highly certain it has calculated the correct response behavior. This is facilitated by adding state [$$\mathrm {CONSIDER}$$] to the state machine, and also by defining one of the behaviors to be “continue training.” In practice, the swarm is trained to continue training in response to a neutral gray light pattern, which is then displayed during the training phase. [$$\mathrm {CONSIDER}$$] can only be accessed once a robot believes the desired behavior is no longer “continue training” (lines 9–12). The function [$$\mathrm {consideration\\_time\\_exhausted()}$$] is used to ensure a robot remains continuously in state [$$\mathrm {CONSIDER}$$] for a predetermined amount of time before switching to state [$$\mathrm {ACT}$$] to perform the prescribed behavior (lines 11–14). This adds robustness to erroneous outputs from partially trained models. Algorithm 4 depicts the message broadcast thread. Function [$$\mathrm {get\\_neural\\_data()}$$] retrieves the neural network data that resides on this robot’s portion of the group mind (line 3). For each training example as well as the real-time environmental sensor input, this includes both the forward neural signals and backpropogation messages (including training iteration number and, for each backpropogation message, the destination ID). Neural data is broadcast, along with this robot’s state and ID (line 4). In practice, due to the Kilobots’ small message payload size (9 bytes), we must divide each batch of neural network data across multiple messages (not shown). If there is movement behavior such that state [$$\mathrm {ACT}$$] is used, then the robot sends this state, its ID, and the swarm behavior class [$$\tau $$] output of the neural network vs. real-time environmental data (lines 5–6). To save space we omit the other message passing details necessary to run the standard distributed algorithms that we employ as subroutines during the start-up phase (represented by lines 7–8). The receive message callback function appears in Algorithm 5. Normal training data is received on lines 2–5. If a neighbor has decided to act (e.g., move) then this robot will join it (lines 6–9); making sure to perform its own prescribed behavior [$$behaviour$$] relevant to the overall swarm behavior [$$\tau $$] (line 9). The function [$$\mathrm {modify\\_behaviour}(behaviour,sender\\_behaviour,sender\\_distance)$$] is used to modify the specific output behavior of this robot during the [$$\mathrm {ACT}$$] phase, as a function of interaction with neighboring robots (lines 10–14). This enables more complex swarm behaviors to emerge out of the interactions between robots. For example, the smiley faces in our experiments are created as randomly searching robots stop moving in the vicinity of attracting robots. Lines 15–16 represent other message processing that is used for the distributed subroutines within the start-up phase. References 1. Chialvo, D.R.: Emergent complex neural dynamics. Nature Phys. 6(10), 744–750 (2010)CrossRef 2. Lévy, P.: L’intelligence collective: pour une anthropologie du cyberspace, vol. 11. La Découverte, Paris (1994) 3. Green, D.G.: Emergent behavior in biological systems. In: Green, D.G., Bossomaier, T.J. (eds.) Complex Systems: From Biology to Computation, pp. 24–35. IOS Press (1993) 4. Wheeler, W.M.: The ant-colony as an organism. J. Morphol. 22, 307 (1912)CrossRef 5. Stapledon, O.: Last and First Men: A story of the Near and Far Future. Penguin Books, London (1937) 6. Asimov, I.: Foundation and Earth. Foundation series Doubleday (1986) 7. Berman, R., Piller, M., Taylor, J., Taylor, M., Price, A.S., Gaberman, M.: Collective. Star Trek Telivision Series Episode VOY.6.16, Directed by Allison Liddi, Based on Concept by Gene Roddenberry (February 2000) 8. Farley, B., Clark, W.: Simulation of self-organizing systems by digital computer. Trans. IRE Prof. Group Inf. Theor. 4(4), 76–84 (1954)MathSciNetCrossRef 9. Pomerleau, D.A.: Efficient training of artificial neural networks for autonomous navigation. Neural Comput. 3(1), 88–97 (1991)CrossRef 10. Cireşan, D., Meierand, U., Gambardella, L., Schmidhuber, J.: Deep, big, simple neural nets for handwritten digit recognition. Neural Comput. 22(12), 3207–3220 (2010)CrossRef 11. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., Hassabis, D.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)CrossRef 12. Amkraut, S., Girard, M., Karl, G.: Motion studies for a work in progress entitled eurnythmy. SIGGRAPH Video Rev., 21 (1985) 13. Reynolds, C.W.: Flocks, herds, and schools: a distributed behavioral model. Comput. Graph. 21(4), 25–34 (1987)CrossRef 14. Matarić, M.J.: Interaction and Intelligent Behavior. Ph.D. thesis (1994) 15. Martinoli, A.: Swarm intelligence in autonomous collective robotics: from tools to the analysis and synthesis of distributed control strategies. Ph.D. thesis, Ecole Polytechnique Fédérale de Lausanne (1999) 16. Ferrante, E., Turgut, A.E., Huepe, C., Stranieri, A., Pinciroli, C., Dorigo, M.: Self-organized flocking with a mobile robot swarm: a novel motion control method. Adaptive Behavior (2012) 17. Werfel, J., Petersen, K., Nagpal, R.: Designing collective behavior in a termite-inspired robot construction team. Science 343(6172), 754–758 (2014)CrossRef 18. Steels, L.: Cooperation between distributed agents through self-organisation. In: Proceedings of IEEE International Workshop on Intelligent Robots and Systems ’90, IROS 1990, pp. 8–14, July 1990. Towards a New Frontier of Applications 19. Becker, A., Habibi, G., Werfel, J., Rubenstein, M., McLurkin, J.: Massive uniform manipulation: Controlling large populations of simple robots with a common input signal. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 520–527, November 2013 20. Wilson, S., Pavlic, T.P., Kumar, G.P., Buffin, A., Pratt, S.C., Berman, S.: Design of ant-inspired stochastic control policies for collective transport by robotic swarms. Swarm Intell. 8(4), 303–327 (2014)CrossRef 21. Groß, R., Bonani, M., Mondada, F., Dorigo, M.: Autonomous self-assembly in swarm-bots. IEEE Trans. Robot. 22(6), 1115–1130 (2006)CrossRef 22. Gilpin, K., Knaian, A., Rus, D.: Robot pebbles: one centimeter modules for programmable matter through self-disassembly. In: 2010 IEEE International Conference on Robotics and Automation (ICRA), pp. 2485–2492. IEEE (2010) 23. Butera, W.J.: Programming a paintable computer. Ph.D. thesis (2002) 24. Rubenstein, M., Cornejo, A., Nagpal, R.: Programmable self-assembly in a thousand-robot swarm. Science 345, 795–799 (2014)CrossRef 25. Farber, P., Asanovic, K.: Parallel neural network training on multi-spert. In: 1997 3rd International Conference on Algorithms and Architectures for Parallel Processing, ICAp 1997, pp. 659–666, December 1997 26. LeCun, Y.A., Bottou, L., Orr, G.B., Müller, K.-R.: Efficient backprop. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 9–48. Springer, Heidelberg (2012). doi:10.​1007/​978-3-642-35289-8\_​3 CrossRef © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_73 Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning Wil Thomason¹   and Ross A. Knepper¹   (1) Department of Computer Science, Cornell University, Ithaca, USA     Wil Thomason (Corresponding author) Email: wbthomason@cs.cornell.edu   Ross A. Knepper Email: rak@cs.cornell.edu 1 Introduction Human communication is highly multimodal, including speech, gesture, gaze, facial expressions, and body language. Robots serving as human teammates must act on such multimodal communicative inputs from humans, even when the message may not be clear from any single modality. In this paper, we explore a method for achieving increased understanding of complex, situated communications by leveraging coordinated natural language, gesture, and context. These three problems have largely been treated separately, but unified consideration of them can yield gains in comprehension [1, 12]. Gesture recognition has been an area of investigation from the early days of computer vision, but modern gesture recognition systems remain fragile. Most approaches focus on speed and accuracy of recognition, yet remain restricted to a fixed gestural lexicon [2, 6, 7, 13, 18, 25] and cannot recognize gestures outside of a small pre-trained set with any accuracy [2, 14]. Our work departs from this traditional model in that the set of gestures it can recognize is not limited to the gestural lexicon used for its training. Even in simplified domains, naive classifiers can fail to recognize instances of trained gestures due to human gestural variability. Humans resort to gesture when speech is insufficient, such as due to inability to recall a word, inability to be heard, or inadequate time to formulate speech. For these reasons, gesture is prevalent in human discourse. Yet gestures defy attempts at canonical classification both due to variations within and among individuals and due to their subjective interpretations. We define the unfamiliar gesture understanding problem: given an observation of a previously unseen gesture (i.e. a gesture of a class not present in any training data given to the system), we wish to output a contextually reasonable description in natural language of the gesture’s intended meaning. This problem is an instance of the machine learning problem of zero-shot learning, a burgeoning area of machine learning that seeks to classify data without having seen examples of its class in the training stage. Most prior work in the area [10, 16, 19] makes use of a multimodal dataset to perform the zero-shot task. However, the zero-shot task has not yet been demonstrated for gestural data. In the related one-shot learning task, gesture understanding has been shown from only one example of a given class in the training stage [21–23]. The primary drawback of such approaches is their reliance on a fixed lexicon of gestures. We remove this drawback by creating a novel multimodal embedding space using techniques from convolutional neural nets to handle variable length gestures and allow for the description of arbitrary unfamiliar gestural data. The ChaLearn 2013 multi-modal gesture recognition challenge explored techniques for increasing the robustness of understanding by combining gesture and text [5]. However, the entries still only recognize a small fixed set of gestures. Other work in situated multimodal understanding systems has been limited to combining simple diectic (pointing) gestures with speech, to differentiate among a small set of referent objects [3]. These pointing gestures represent a small and relatively simple subset of human gestures. Work in another direction has investigated the use of gestures by robots [9, 17]. Work in this area has focused on studying which gestures are most effective in robotic storytelling (e.g. Huang and Mutlu [9]), or on creating systems to make it easier for humans to encode gestures for robots to make. We aim to provide understanding of gestural meaning. Finally, the work of Takano, Hamano, and Nakamura [20] moves toward a general association between word labels and gestures through the use of correlated vector spaces. This work is focused on the retrieval of relevant motion data for a word query from a database, whereas our work seeks to construct a mapping from gestures to words. In general, the state of the art in recognition and gestural understanding, appears to be limited to pointing gestures as in [3], and other gestural recognition techniques which have been developed independently of robotic applications. In this paper, we contribute a novel approach to understanding unfamiliar language, gesture, and context in order to be able to understand diverse and varied gestures. 2 Technical Approach Two key insights of our approach to derive meaning from unfamiliar gestures are to recognize physical similarities among gestures by commonalities in their constituent “sub-gestures” and to leverage redundant information contained in simultaneous, situated speech and gesture. We begin with some intuition for these two insights. First, whereas gestures with similar high-level physical form do not always have similar meanings, many gestures with related meanings share common “sub-gestural” motion components. For instance, pushing and pointing gestures both involve an outward motion, indicating a semantically-related position away from the gesturer. Second, a common mode of gestural use in conversation is to add redundancy to spoken information to increase the chance of the speaker’s meaning being correctly inferred. For example, when giving instructions, a speaker may make gestures that represent physically the actions their words describe. By sampling coincident speech and gesture in a variety of contexts, we can therefore construct from experience an approximate partial map between the meanings of the two modes of communication. Intuitively, these two insights combined allow us to understand unfamiliar gestures. First, we can exploit the structural similarity of gestures with related meanings to map an unfamiliar gesture to a location in an embedding space of gestures that reflects its relation to other gestures we have previously seen. We can then use this placement and the partial map between gestures and speech that we have established during training to determine a reasonable meaning for the unfamiliar gesture. 2.1 Details Our approach is built around a multi-stage pipeline which takes individual gestures formatted as RGB-D data as its input and outputs a natural-language description of the gesture. The stages of the pipeline are as follows, in order: Gesture Embedding: The first step of our approach is to create an embedding space mapping gestures to the corresponding words. We begin by splitting a gesture into its constituent sub-gestural motions. For a gesture g encoded as a series of RGB-D frames, we first partition the frames of g into windows of 120 ms, each overlapping by 20 ms. The purpose of these windows is to approximate sub-gestures. We rely on this approximation due to the recursive structure of the sub-gestural model: gestures are composed of sub-gestures, which may themselves be composed of sub-gestures, and so on. Thus, we use short overlapping windows to attempt to capture the “first level” of this structure, i.e. the sub-gestures which directly compose into gestures. The duration of these windows and their overlap was determined empirically. In future work, we hope to explore the possibility of dynamically-sized windows or other means of more accurately segmenting sub-gestures. Next, we extract the human skeleton H of the user from each window, and compute the velocity [$$\varvec{v\_{j\_i}}$$] of the joints [$$j\_1,\ldots ,j\_n$$] comprising H for each frame in the window. This process results in a time series [$$\varvec{V}$$] of joint velocities in the window. We complete feature computation by computing the discrete Fourier transform [$$\varvec{\psi \_g}$$] of [$$\varvec{V}$$]. Specifically, we compute for each joint the 3-D Fourier transform of its velocity in [$$\varvec{V}$$]. This feature is inspired by Kondo et al. [13] in its use of a transform of joint velocities as a means of describing gestures. However, we differ from Kondo et al. [13] in several ways. First, our feature representation is over sub-gestures rather than whole gestures. This difference is key to our model of gestures as a composition of smaller semantic units. Second, the features used in Kondo et al. [13] are histograms of frequency domain transforms of gestures, whereas this work uses the raw frequency domain representation of each sub-gesture. After computing [$$\varvec{\psi \_g}$$], we use it as the input to a neural network. This network is composed of two 1-D convolutional layers separated by a max pooling layer to allow for variable-length inputs, and followed by three fully-connected layers. This structure is simply a standard multi-layer perceptron placed atop a two-layer convolutional architecture often used in the field of object classification. The architecture of this network was chosen for its simplicity and ease of training; we hope to investigate the use of alternate architectures with our sub-gestural feature descriptor and zero-shot learning model in future work. We assume that there exists a bag of words [$$W=\{\varvec{w\_1},\ldots ,\varvec{w\_k}\}$$] associated with each g, where each [$$\varvec{w\_i}$$] is encoded as a vector in a pre-trained word embedding (in particular, we use Word2Vec [15]). At training time this is given; in practical usage we aim to recover this bag of words. As such, we train the network to minimize the following loss function, where f is the function computed by the network: [$$\begin{aligned} \mathcal {L}(\varvec{\psi \_g},W)=\left\| \frac{\sum \_{\varvec{w\_i}\in W} \varvec{w\_i}}{k} - f(\varvec{\psi \_g})\right\| \end{aligned}$$] (1) This loss function is simply the norm of the difference between the centroid in the pretrained word embedding space of the words corresponding to g and where in this space f places g. In other words, we learn a mapping which places gestures closest to those words most strongly associated with them. In usage, we compute [$$f(\varvec{\psi \_g})$$] and examine its k nearest neighbors in the word embedding space to approximate of the set of words most strongly associated with g. Salience Heuristic: Although the above multimodal embedding produces a set of candidate words to describe a gesture, it does not take into account any notion of dynamic context, i.e. context from specific, recent interactions. We propose a simple salience heuristic to filter down the set of possible descriptor words as the final stage in our pipeline. This heuristic, which is inspired by Eldon, Whitney, and Tellex [3], imposes an ordering on the candidate descriptors by computing a variant on the common tf-idf metric [11] for each. This variant is a direct analogue of tf-idf for the gestural context, and computes: [$$\begin{aligned} \mathcal {S}(w)=\left( 1+\log \left( \sum \_{i=1}^m\frac{1}{i}\mathcal {I}\_w(\mathcal {O}\_i)\right) \right) \cdot \left( \log \left( 1+\frac{N}{\sum \_{i=1}^N \mathcal {I}\_w(\mathcal {C}\_i)}\right) \right) \end{aligned}$$] (2) where the [$$\mathcal {O}\_i$$] are the m most recent bags of words recorded by the system (in the order of recording), the [$$\mathcal {C}\_i$$] are bags of words associated with known (training) gestures, [$$\mathcal {I}\_w(x)$$] is an indicator function that is 1 if word w is present in bag of words x, and 0 otherwise, and N is the total number of known gestures. This heuristic therefore favors words which have recently been relevant to gestures used in the current conversation (i.e. favoring topic continuity) while avoiding words which are relevant to a large number of gestures and are therefore unlikely to be very specific descriptors of a given gesture. If the embedding in Sect. 2.1 returns k possible descriptors, the top [$$\ell <k$$] according to their ranking by [$$\mathcal {S}$$] are chosen for the final output of the system. 3 Experiments We have conducted several experiments to validate the performance of our technique. 3.1 ChaLearn Dataset We have conducted preliminary experiments assessing the performance of both the zero-shot learning model and the salience heuristic. 3.2 Zero-Shot Model We trained our zero-shot model on a subset of the data from Guyon et al. [8] consisting of surgical hand signals. As these data did not include the language accompanying the gestures, we created a set of plausible accompanying words for each gesture, constructed by randomly sampling salient words from a textual description of the surgical instruments indicated by each class of gesture. We withheld all examples of the straight scissors class from the training process as test data. After training, we evaluated the performance of the model at generating reasonable descriptions for gestures from both the known and unknown classes. As shown in Fig. 1, we are able to successfully generate sets of words describing each gesture, regardless of whether or not the gesture’s class was present in the training data. We note that holding out several classes produced lower-quality results; however, given that our training dataset was very small (100 gestures, total), we attribute this drop in performance to this change causing insufficient training data. [] Fig. 1. The output of our zero-shot learning system for both known (syringe) and unknown (straight scissors) classes of gesture. We have also performed an experiment in which we held out each class of surgical gesture in turn, and assessed the performance of our system. The goal of our unfamiliar gesture understanding system is to produce clusters of words for a gesture which a human would agree were reasonably associated with said gesture. As such, we have devised the following metric of performance: For each bag of words returned by our system, we label the result as “Not Relevant” if it contains fewer than four words deemed relevant to the input gesture by a human, “Relevant” if it contains between five and eight such words, and “Very Relevant” if it contains nine or ten such words (the size of the returned bag of words is ten). The results of our system’s performance according to this metric are shown in Fig. 2. [] Fig. 2. The performance of our unfamiliar gesture understanding system for each held out class of surgical gesture As may be seen, we achieve a majority of “Relevant” or “Very Relevant” results in a significant number of cases. However, there are notably some cases (such as when Army-Navy Retractor is the held-out class) for which our system performs very poorly. However, given the very low suitability of the ChaLearn data for our task, these results still demonstrate that our system is capable of providing reasonable descriptions of unfamiliar gestures. 3.3 Salience Heuristic To test the performance of our salience heuristic, we constructed a set of “conversations” composed of a sequence of simulated past outputs of our system and a simulated output of our zero-shot model (as the next element in the sequence). We then applied our salience heuristic to these data, and qualitatively assessed the results in terms of the salience of the words selected. We show an example of these results in Fig. 3. The result shown is for a shortened conversational sequence due to space constraints; we assessed the system on longer sequences. As shown, we succeed in selecting descriptors which are more recently relevant and more relevant to the conversation overall. We ran trials on a large number of simulated conversations, injecting intentionally irrelevant terms into the input and testing if they were removed (without removing the relevant terms) after passing the conversation and input through the heuristic. In 61% of trials for these simulated data, we found that the heuristic scored the inserted irrelevant words as less relevant than the inserted relevant words, as desired. [] Fig. 3. The output of our salience heuristic on an example “conversation”. These results establish the viability of our approach. We are able to generate a set of reasonable descriptors for unfamiliar gestures without losing the capability to do so for gestures in training classes. Further, we are able to remove contextually irrelevant words from the generated set of descriptors to improve the overall accuracy of the final set of descriptors. This set is useful for understanding the meaning of gestures. 3.4 End-to-End Gesture Understanding We have integrated our system into a real-world robotic platform to test its end-to-end functionality. The experimental setup (pictured in Fig. 4a) was as follows: A human user and a mobile manipulation platform are positioned on opposite sides of a table, facing each other. A set of objects are placed onto the table. The human user makes a request for a particular object, and the manipulation platform must understand the request and grasp the correct object. Critically, the request made by the human contains both verbal and gestural elements, and is ambiguous without consideration of both components in conjunction. Specifically, the verbal component of the request identifies an object by color, but the table holds several objects of the specified color, making the referent object ambiguous. In this case, the gestural component is used to communicate the relative size of the referent object, disambiguating the request. [] Fig. 4. The unfamiliar gesture understanding system integrated with the Optimus mobile manipulation platform. For this experiment, our system was made to run online and integrated with the verbal understanding and manipulation components of the overall platform. Thus, although this experiment is fairly simple in terms of the gestures, it serves to demonstrate the viability of our system for use in robotic applications. In future work, we intend to measure the impact of our unfamiliar gesture understanding system on the overall understanding capability of a robot participating in a collaborative task with a human. We plan to run the entire system (as detailed above), on a Rethink Robotics Baxter robot. We will be able to capture the empirical performance of our system in a realistic scenario by using Baxter to perform an object identification task. We will run trials in which a human user will be asked to indicate to Baxter the object which they wish to obtain (e.g. with an ambiguous phrase such as “the red one” and an accompanying gesture to indicate that, of the available red objects, they mean a hammer). We will assess Baxter’s performance at identifying the correct object both in the presence and absence of gesture to better quantify the contribution of our system’s abilities. As we are aware of no direct baselines (i.e. no other systems capable of performing zero-shot learning on gestures), we will compare our system to the current state of the art in gesture recognition and natural language understanding (e.g. [12, 24–26]), trained on the same data as we use to train our system. We will post the results of this experiment to our project site¹. 3.5 Multimodal Corpus Collection A dearth of multimodal data limits the development of algorithms for situated gesture and language understanding. Guyon et al. [8] and Escalera et al. [4] have provided a good starting point, but we see possible improvement in areas such as the artificial nature of the gestures contained (i.e., the performers were instructed to gesture) and the dataset’s focus on beat and emblematic gestures. We have begun to conduct an experiment to collect a new gestural dataset for use in training our model and eventually for public release. Participants in the experiment are placed in a room with the study organizer. The room contains two tables, one for the participant and the other for the organizer. The table for the organizer holds a small blind, under which a piece of origami paper is placed. The participant is given a set of intentionally vague instructions for folding origami. They are told that the instructions have been algorithmically generated, and that we wish to test their correctness and interpretability. By concealing the true purpose, this pretense ensures that the gestures produced are natural. The participant is asked to convey the directions for constructing the origami to the organizer, using any speech or gestures desired, but without showing the organizer their instructions. The participant’s speech and gestures are recorded by microphones and Kinect sensors. We have captured gestures from approximately 15 participants in this manner. Most sessions result in a large number of gestures describing the physical properties of the origami being folded: shapes, relative sizes, and fold structures (i.e. the direction and placement of a fold) are the concepts most commonly communicated through gesture. We are continuing to collect data, and hope to record a minimum of 50 participants before concluding the study. We will be releasing the collected data on our project site[$$^{1}$$]. The recordings from each trial will be transcribed and processed to extract the skeletal data of each participant. These transcriptions will be annotated with timing information. To ensure the anonymity of the study participants, we will release only the annotated transcripts and skeletal data for each trial to the public. We believe that this combination of data is sufficient to make the dataset useful for experiments in gestural understanding, linguistics, and other fields. We hope that the completed dataset will have both immediate direct impact and longer-term indirect impact. The obvious benefit of the study is that it provides us with more data for training. By increasing both the quantity and quality of our training data, we hope to be able to attain better performance at the unfamiliar gestures task. More broadly, however, the collected dataset will enable further studies to be conducted by both our lab and other researchers. The dataset is intentionally general—nothing in its framing or collection is inherently robotics-specific. This generality makes the dataset potentially interesting to researchers across the fields of psychology, computer vision, machine learning, HCI, HRI, and general robotics. The data collected are realistic, as participants are kept oblivious of the true purpose of the study, and no special effort is made to elicit or force gestures. While the task is artificial, it still represents a realistic example of a collaborative problem-solving task. This means that it may be of interest to researchers in areas entirely separate from the topic of gesture, such as group dynamics and sociology. 4 Conclusions The largest weakness of our unfamiliar gesture understanding system is the lack of data suitable for use in training of the system. We are seeking to rectify this deficiency through our aforementioned data collection experiment; however, this experiment has not yet been concluded. This lack of data has limited evaluations of our system thus far to relatively simple applications. Even so, we are able to draw some conclusions about the performance and properties of our system. First, it is apparent that the performance of the unfamiliar gesture understanding system is predicated on the quality of the word embedding space it uses. In the most basic sense, the word embedding must contain mappings for words which could reasonably be used to describe any gesture that the system hopes to be able to understand. We do not yet have a means of determining a threshold for suitability, which means that entirely unrelated words may be returned for a gesture in the absence of sufficiently many relevant words. Although our salience heuristic is designed to remove irrelevant words, it cannot determine if a word is relevant to a particular gesture, but only to a context. More subtly, we rely on the word embedding space placing similar words close to each other. While this property often holds, it is not universally true. Related to these issues is the tradeoff between coverage and comprehensibility. In other words, if the word embedding space contains more words and thus has better coverage, it may have lower comprehensibility, because it is more probable that an unrelated word will be closer to the point at which a gesture is embedded. Second, we see room for improvement in experimentation with both the sub-gestural feature representation and the architecture of the neural network used to compute the aligned gesture embedding. In the latter case, we have experimented with the number of layers in the multi-layer perceptron component of the network, but the dearth of data available for training means that we quickly succumb to overfitting as more layers are added. In the former case, although our current approximation does a reasonable job of capturing small motions corresponding to sub-gestures, our intuition for sub-gestures suggests that they are not all of uniform or bounded duration, and thus that a more adaptive segmentation approach may have greater success. Acknowledgements This material is based upon research supported by the Office of Naval Research under Award Number N00014-16-1-2080. We are grateful for this support. References 1. Artzi, Y., Zettlemoyer, L.: UW SPF: The University of Washington Semantic Parsing Framework (2013) 2. Chen, Q., Georganas, N.D., Petriu, E.M.: Real-time vision-based hand gesture recognition using haar-like features. In: Instrumentation and Measurement Technology Conference Proceedings, IMTC 2007, pp. 1–6. IEEE, May 2007. doi:10.​1109/​IMTC.​2007.​379068 3. Eldon, M., Whitney, D., Tellex, S.: Interpreting Multimodal Referring Expressions in Real Time (2015). https://​edge.​edx.​org/​assetv1:​Brown+CSCI2951-K+2015\_​T2+type@asset+block@eldon15.​pdf 4. Escalera, S., et al.: Chalearn multi-modal gesture recognition 2013: grand challenge and workshop summary. In: Proceedings of the 15th ACM on International Conference on Multimodal Interaction, pp. 365–368. ACM (2013) 5. Escalera, S., et al.: Multi-modal gesture recognition challenge 2013: dataset and results. In: Proceedings of the 15th ACM on International Conference on Multimodal Interaction, pp. 445–452. ACM (2013) 6. Gawron, P., et al.: Eigengestures for natural human computer interface. arXiv:​1105.​1293 [cs] 103, pp. 49–56 (2011). doi:10.​1007/​978-3-642-23169-8\_​6, http://​arxiv.​org/​abs/​1105.​1293. Accessed 29 Oct 2015 7. Ge, S.S., Yang, Y., Lee, T.H.: Hand gesture recognition and tracking based on distributed locally linear embedding. Image Vis. Comput. 26(12), 1607–1620 (2008). ISSN: 0262-8856. doi:10.​1016/​j.​imavis.​2008.​03.​004, http://​www.​sciencedirect.​com/​science/​article/​pii/​S026288560800069​3. Accessed 18 Nov 2015 8. Guyon, I., et al.: The ChaLearn gesture dataset (CGD 2011). Mach. Vis. Appl. 25(8), 1929–1951 (2014). ISSN 0932-8092, 1432-1769. doi:10.​1007/​s00138-014-0596-3, http://​link.​springer.​com/​article/​10.​1007/​s00138-014-0596-3. Accessed 02 Mar 2016 9. Huang, C.-M., Mutlu, B.: Modeling and evaluating narrative gestures for humanlike robots. In: Robotics: Science and Systems (2013) 10. Jetley, S., et al.: Prototypical Priors: From Improving Classification to Zero- Shot Learning. arXiv:​1512.​01192 [cs] (3 December 2015). http://​arxiv.​org/​abs/​1512.​01192. Accessed 29 Jan 2016 11. Jones, K.S.: A statistical interpretation of term specificity and its application in retrieval. J. Documentation 28, 11–21 (1972)CrossRef 12. Kollar, T., et al.: Generalized grounding graphs: a probabilistic framework for understanding grounded language. In: JAIR (2013). https://​people.​csail.​mit.​edu/​sachih/​home/​wp-content/​uploads/​2014/​04/​G3\_​JAIR.​pdf 13. Kondo, Y.: Body gesture classification based on bag-of-features in frequency domain of motion. In: 2012 IEEE RO-MAN, pp. 386–391 (2012). doi:10.​1109/​ROMAN.​2012.​6343783 14. Luo, D., Ohya, J.: Study on human gesture recognition from moving camera images. In: 2010 IEEE International Conference on Multimedia and Expo (ICME), pp. 274–279, July 2010. doi:10.​1109/​ICME.​2010.​5582998 15. Mikolov, T., et al.: Efficient Estimation of Word Representations in Vector Space. arXiv:​1301.​3781 [cs] (16 January 2013). arXiv:​1301.​3781, http://​arxiv.​org/​abs/​1301.​3781. Accessed 30 Mar 2016 16. Palatucci, M., et al.: Zero-shot learning with semantic output codes. In: Neural Information Processing Systems (NIPS), December 2009 17. Sauppé, A., Mutlu, B.: Robot deictics: how gesture and context shape referential communication. In: Proceedings of the 2014 ACM/IEEE International Conference on Human-robot Interaction, HRI 2014, New York, NY, USA, pp. 342–349. ACM (2014). ISBN 978-1-4503-2658-2. doi:10.​1145/​2559636.​2559657, http://​doi.​acm.​org/​10.​1145/​2559636.​2559657. Accessed 19 Nov 2015 18. Segers, V., Connan, J.: Real-time gesture recognition using eigenvectors (2009). http://​www.​cs.​uwc.​ac.​za/​~jconnan/​publications/​Paper%20​56%20​-%20​Segers.​pdf 19. Socher, R., et al.: Zero-Shot Learning Through Cross-Modal Transfer. arXiv:​ 1301.​3666 [cs] (16 January 2013). arXiv:​1301.​3666, http://​arxiv.​org/​abs/​1301.​3666. Accessed 25 Jan 2016 20. Takano, W., Hamano, S., Nakamura, Y.: Correlated space formation for human whole-body motion primitives and descriptive word labels. Rob. Auton. Syst. 66, 35–43 (2015)CrossRef 21. Mahbub, U., Imtiaz, H.: One-Shot-Learning Gesture Recognition Using Motion History Based Gesture Silhouettes (2013). doi:10.​12792/​iciae2013 22. Wan, J., et al.: One-shot learning gesture recognition from RGB-D data using bag of features. J. Mach. Learn. Res. 14(1), 2549–2582 (2013). ISSN 1532-4435. http://​dl.​acm.​org/​citation.​cfm?​id=​2567709.​2567743. Accessed 25 Jan 2016 23. Di, W., Zhu, F., Shao, L.: One shot learning gesture recognition from RGBD images. In: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 7–12. doi:10.​1109/​CVPRW.​2012.​6239179, June 2012 24. Wu, J.: Fusing multi-modal features for gesture recognition. In: Proceedings of the 15th ACM on International Conference on Multimodal Interaction, ICMI 2013, New York, NY, USA, pp. 453–460. ACM (2013). ISBN 978-1-4503-2129-7. doi:10.​1145/​2522848.​2532589, http://​doi.​acm.​org/​10.​1145/​2522848.​2532589. Accessed 31 Mar 2016 25. Yin, Y., Davis, R.: Gesture spotting and recognition using salience detection and concatenated hidden Markov models. In: Proceedings of the 15th ACM on International Conference on Multimodal Interaction, ICMI 2013, New York, NY, USA, pp. 489–494. ACM (2013). ISBN: 978-1-4503-2129-7. doi:10.​1145/​2522848.​2532588, http://​doi.​acm.​org/​10.​1145/​2522848.​2532588. Accessed 22 Jan 2016 26. Zhou, Y., et al.: Kernel-based sparse representation for gesture recognition. Pattern Recogn. 46(12), 3208–3222 (2013). ISSN 0031-3203. doi:10.​1016/​j.​patcog.​2013.​06.​007, http://​dx.​doi.​org/​10.​1016/​j.​patcog.​2013. Accessed 29 Jan 2016 Footnotes 1 https://​rpal.​cs.​cornell.​edu/​projects/​unfamiliar-gestures.   © Springer International Publishing AG 2017 Dana Kulić, Yoshihiko Nakamura, Oussama Khatib and Gentiane Venture (eds.)2016 International Symposium on Experimental RoboticsSpringer Proceedings in Advanced Robotics110.1007/978-3-319-50115-4\_74 Erratum Erratum to: Application of Robot Manipulator for Cardiopulmonary Resuscitation Jaesug Jung¹  , Jeeseop Kim¹  , Sanghyun Kim¹  , Woon Yong Kwon²  , Sang Hoon Na^(2, 3  ), Kyung Su Kim²  , Gil Joon Suh²  , Byeong Wook Yoo⁴  , Jin Woo Choi⁴  , Jung Chan Lee^(5, 6, 7  ) and Jaeheung Park^(1, 8  ) (1) Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea (2) Department of Emergency Medicine, Seoul National University College of Medicine, Seoul, South Korea (3) Division of Cardiology, Department of Internal Medicine, Seoul National University College of Medicine, Seoul, South Korea (4) Interdisciplinary Program for Bioengineering, Graduate School, Seoul National University, Seoul, South Korea (5) Department of Biomedical Engineering, Seoul National University College of Medicine, Seoul, South Korea (6) Department of Biomedical Engineering, Seoul National University Hospital, Seoul, South Korea (7) Institute of Medical and Biological Engineering, Medical Research Center, Seoul National University, Seoul, South Korea (8) Advanced Institutes of Convergence Technology, Suwon, Republic of Korea     Jaesug Jung Email: jjs916@snu.ac.kr URL: https://dyros.snu.ac.kr   Jeeseop Kim Email: kpersona1004@snu.ac.kr URL: https://dyros.snu.ac.kr   Sanghyun Kim Email: ggory15@snu.ac.kr URL: https://dyros.snu.ac.kr   Woon Yong Kwon Email: kwy711@hanmail.net URL: https://dyros.snu.ac.kr   Sang Hoon Na Email: nasanghoon@gmail.com URL: https://dyros.snu.ac.kr   Kyung Su Kim Email: kanesu@gmail.com URL: https://dyros.snu.ac.kr   Gil Joon Suh Email: suhgil@snu.ac.kr URL: https://dyros.snu.ac.kr   Byeong Wook Yoo Email: subak@melab.snu.ac.kr URL: https://dyros.snu.ac.kr   Jin Woo Choi Email: jinchoi9@melab.snu.ac.kr URL: https://dyros.snu.ac.kr   Jung Chan Lee Email: ljch@snu.ac.kr URL: https://dyros.snu.ac.kr   Jaeheung Park (Corresponding author) Email: park73@snu.ac.kr URL: https://dyros.snu.ac.kr The updated original online version for this chapter can be found at DOI 10.​1007/​978-3-319-50115-4\_​24 Erratum to: Chapter “Application of Robot Manipulator for Cardiopulmonary Resuscitation” in: D. Kulić et al. (eds.), 2016 International Symposium on Experimental Robotics, Springer Proceedings in Advanced Robotics 1, DOI 10.​1007/​978-3-319-50115-4\_​24 The original version of the book was inadvertently published with the misspelt author name “Kyoung Su Kim” which should be corrected to read as “Kyung Su Kim”. The erratum book has been updated with the change. Author Index A Aksoy, Eren E. Alastuey, Jorge Cañardo Allen, Craig Andersen, Hans Anderson, Jacob Andries, Mihai Ang, Marcelo H. Araki, Brandon Aronson, Reuben M. Asada, Harry Asfour, Tamim Aukes, Daniel M. B Bagnell, J. Andrew Bajcsy, Ruzena Barber, Daniel Basañez, Luis Batts, Zachary Beachly, Evan Belta, Calin Berenson, Dmitry Bergbreiter, Sarah Bestick, Aaron Bhatia, Ankit Bidaud, Philippe Bjelonic, Marko Bobadilla, Leonardo Bodt, Barry Borges, Paulo V.K. Boularias, Abdeslam Bourne, David Bowkett, Joseph Brockers, Roland Brox, Thomas Burdick, Joel W. Burgard, Wolfram Burkhardt, Matt Butterworth, David T. Byl, Katie C Capobianco, Roberto Carlone, Luca Carrillo-Arce, Luis C. Chatila, Raja Chavan-Dafle, Nikhil Chavez-Garcia, R. Omar Chebotar, Yevgen Chen, Xiangyu Cheung, Michael Childers, Marshal Choi, Changhyun Choi, Jin Woo Choudhary, Siddharth Choudhury, Shushman Christensen, Henrik I. Claret, Josep-Arnau Clement, Lee Conte, Gianpaolo Corrales Ramon, Juan-Antonio Correll, Nikolaus Coskun, A. Cox, Nicholas Cristofalo, Eric D D’Souza, Rollen Da Silva, Jimmy Daftry, Shreyansh Daniilidis, Kostas Dean, Robert Del Preto, Joseph Dellaert, Frank Dellin, Christopher M. Delmerico, Jeffrey Desaraju, Vishnu R. Detweiler, Carrick Do, Tien Doherty, Patrick Doherty, S.M. Dragan, Anca D. Dubois, Magda Dubrawski, Artur Dudek, Gregory Duperret, Jeffrey Duvallet, Felix E Eizentals, Peteris Elbaum, Sebastian Eng, You Hong Erdem, Esra F Ferenz, Peter Ficuciello, Fanny Fox, Dieter Fragoso, Anthony Fukui, Rui G Gabler, Volker Galceran, Enric Gambardella, Luca Maria Ghasemlou, S. Giardina, Fabio Gilitschenski, Igor Giusti, Alessandro Grotz, Markus Guggenheim, Jacob Guillame-Bert, Mathieu Gurău, Corina Guy, Aymeric H Hamill, Scott Hansen, Johanna Harada, Akinori Harada, Kensuke Harding, Matthew Hata, Alberto Yukinobu Hausman, Karol Hebert, Martial Heckman, Christoffer R. Higgins, James Hinzmann, Timo Hirai, Shinichi Ho, Van Anh Hoarau, Antoine Hockman, Benjamin Homberger, Timon Howard, Thomas M. Hsieh, M. Ani Huber, Gerold I Iida, Fumiya Inaba, Masayuki Isler, Volkan J Jagtap, A.S. Jia, Zhenzhong Johansen, Tor Arne Johnson, Aaron M. Joukov, Vlad Jung, Jaesug K Kaiser, Peter Kakiuchi, Yohei Kakodkar, Nikhil Kaminaga, Hiroshi Kanoulas, Dimitrios Kappassov, Zhanat Karasawa, Hiroyuki Karydis, Konstantinos Kawasaki, Koji Keegan, Terence Kelly, Jonathan Khatib, Oussama Kikuchi, Kohei Kim, Jeeseop Kim, Joohyung Kim, Kyung Su Kim, Sanghyun Kimura, Shunsuke King, Jennifer E. Klingbeil, Ellen Knepper, Ross A. Ko, Tianyi Kodama, Yuichi Koditschek, Daniel E. Koganesawa, Koichi Koh, Je-sung Kolbert, Roman Komagata, Mitsuo Kottege, Navinda Koval, Michael C. Kramer, Benjamin Kress-Gazit, Hadas Krizhevsky, Alex Kroemer, Oliver Kulić, Dana Kumar, Vijay Kwon, Woon Yong L Laney, Christian Lawler, Christopher Leahy, Kevin Lebiere, Christian Lee, Connor Lee, Gilwoo Lee, Jung Chan Lennon, Craig Levine, Sergey Li, Changshuo Liu, Zhen Lo, Roger Luce-Vayrac, Pierre M Maeda, Masato Manjanna, Sandeep Mao, Huitan Mason, Matthew T. Masumura, Ryo Matsumoto, Yoshio Matthies, Larry Meguenani, Anis Menon, Samir Michael, Nathan Mizui, Masahiko Modasshir, M. Montijano, Eduardo Mueggler, Elias Mulgaonkar, Yash N Na, Sang Hoon Nagata, Kazuyuki Nakamura, Yoshihiko Nakao, Masayuki Nardi, Daniele Navarro-Serment, Luis Nesnas, Issa A.D. Neumann, Gerhard Nieto, Carlos Nobre, Fernando Nomura, Soichiro Nouman, Ahmed O O’Kane, J.M. Oguz, Ozgur S. Oh, Jean Oka, Koichi Okada, Kei Okubo, Takuro Oliveira, Gabriel L. Onda, Hiromu Osa, Takayuki Otte, Michael P Padois, Vincent Parietti, Federico Park, Jaeheung Park, Sangdon Pastor, Peter Patel, Radhen Patoglu, Volkan Pavone, Marco Peele, Bryan Pendleton, Scott Drew Penskiy, Ivan Perdereau, Véronique Peretroukhin, Valentin Peters, Jan Pinto, José Pinto, Lerrel Pohorecky, Sarah Posner, Ingmar Poulakakis, Ioannis Py, Frédéric Q Quattrini Li, Alberto Quillen, Deirdre R Rahman, S. Rajan, Kanna Ramos, Fabio Tozeto Reid, Robert G. Rekleitis, Ioannis Richter, Charles Rodriguez, Alberto Rogers, John Romero, Oscar Rothrock, Brandon Roumeliotis, Stergios I. Roy, Nicholas Roy, Pravakar Rudol, Piotr Rus, Daniela S Sato, Shunsuke Scaramuzza, Davide Schaal, Stefan Schenck, Connor Schwager, Mac Shen, Shaojie Shepherd, Robert F. Shi, Jianbo Sibley, Gabe T. Siciliano, Bruno Siegwart, Roland Silva, Mónica A. Singh, A. Sirken, Aaron Smith, Collin Smith, Ryan N. Sousa, João Sovero, Sebastian Srinivasa, Siddhartha S. Stager, Adam Stastny, Thomas Stentz, Anthony Su, Kunyue Suh, Gil Joon Sukhatme, Gaurav S. Suppe, Arne Swift, Tim T Talele, Nihar Tamamoto, Takumi Tanaka, Kaori Tang, Sarah Tanner, Herbert G. Teng, Zhou Thackston, Allison Thomas, Justin Thomason, Wil Tolley, Michael T. Tong, Chi Hay Treers, Laura Tsagarakis, Nikos G. Tsuji, Tokuo Turkseven, Melih Twidwell, Dirac U Ueda, Jun V Valada, Abhinav Vasile, Cristian-Ioan Velagapudi, Prasanna Venkatraman, Arun Venture, Gentiane Vinokurov, Jerry W Wakita, Yujin Walter, Matthew R. Wan, Weiwei Westwater, Max Wolf, Denis Fernando Wollherr, Dirk Wood, Robert J. Wu, Kejian J. Wzorek, Marius X Xanthidis, M. Xiao, Jing Y Yalciner, Ibrahim Faruk Yamamoto, Ikuo Yamane, Katsu Yao, John W. Yokoi, Kazuhito Yoo, Byeong Wook Yorita, Satoshi Yoshikawa, Masahiro Z Zaccara, Damiano Zhao, Moju Zhou, Zhehua Zhu, Menglong ISER 2016 Tokyo Japan 20161003 20161006
05c195d3-8eec-4b08-9578-4382e7bd244c
trentmkelly/LessWrong-43k
LessWrong
Thinking on the page “Thinking on the page” is a handle that I’ve found useful in improving my writing (and my introspection more generally). When I write, for the most part, I’m trying to put something that I already feel is true into words. But when I think on the page, the words are getting ahead of my internal sense of what’s true. I’m writing something that just sounds good, or even just something that logically flows from what I’ve already written but has come untethered from my internal sense. It’s kind of a generalized verbal overshadowing. I don’t think this is challenging only to people who think [of themselves as thinking] non-verbally, considering how much more universal are experiences like “this puts exactly what I believe into words better than I ever could” or even the satisfaction of finding a word on the tip of the tongue. Some people seem to be better than others not just at describing their internal sense of truth, but at tapping into it at all. But if you think only in internal monologue, you may have a very different perspective on “thinking on the page”—I’d be interested to hear about it. At best, this is what happens in what Terry Tao calls the “rigorous” stage of mathematics education, writing, “The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition.” At worst, it’s argument based on wordplay. Thinking on the page can be vital when you’re working beyond your intuition, but it’s equally vital to notice that you’re doing it. If what you’re writing doesn’t correspond to your internal sense of what’s true, is that because you’re using your words wrong, or because you need to use the page as prosthetic working memory to build a picture that can inform your internal sense? The two places this becomes clearest for me are in academic writing and in art critique. Jargon has the effect of constantly pulling me back towards the page. If it doesn’t describe a native concept, I
2821cebf-6dd2-40e2-868d-67f7f4ad8e30
trentmkelly/LessWrong-43k
LessWrong
Prioritizing Parental Sleep Overall I've really enjoyed being a parent, but poor sleep has been the hardest part. At times we've both been so tired that we weren't able to think clearly about how to fix the problem, which can be very tricky. We've figured this out more over time, however, and sleep has improved with each successive kid (n=3). Here's the main things that have worked for us, all in one place. Note that kids and parents vary a lot: our three kids are different from each other, and your kids likely even more so. I'm hoping that many things on this list will be useful to many people, but it wouldn't be surprising for several of them to be a poor fit for any individual family. I've ordered them roughly down from the ones that I think are most likely to work for anyone who tries them. Perhaps unsurprisingly this is also roughly in the order of youngest to oldest; my impression is kids diverge more over time as their personalities come out. * Sleeping in multiple rooms. If the baby waking up means only interrupting one parents sleep that's about half as much sleep deprivation. * Blackout curtains. Small children typically wake up with the sun, which means you're up to. If you can keep their room solidly dark in the morning by blocking out the sun you can shift their schedule to whenever is most convenient for you, typically a later waking. * Using a Snoo automated bassinet. It gently rocks the baby and shushes them back to sleep when they wake. We used this with our youngest for the first six months. It worked very well, and automated almost all of the helping babies fall back asleep that I needed to do with the older two. We weaned her off it much more gradually than they recommended, with first weaning mode (stops rocking after the baby is asleep) and then running it without rocking. * Sleep training. At some point they no longer need to feed as often, but either don't know how to fall back asleep on their own at the end of a sleep cycle or would enjoy getting a cuddle be
77f40d88-2df8-4e03-b157-9c845cfc35a8
StampyAI/alignment-research-dataset/arbital
Arbital
Rational number The rational numbers are either whole numbers or fractions of whole numbers, like $0,$ $1$, $2$, $\frac{1}{2}$, $\frac{97}{3}$, $-17$, $\frac{-85}{1993},$ and so on. The [set](https://arbital.com/p/3jz) of rational numbers is written $\mathbb Q.$ [Irrational numbers](https://arbital.com/p/54z) like [$\pi$](https://arbital.com/p/49r) and [$e$](https://arbital.com/p/e) are _not_ included in $\mathbb Q;$ the rational numbers are only those numbers which can be written as $\frac{a}{b}$ for integers $a$ and $b$ (where $b \neq 0$). Formally, $\mathbb{Q}$ is the [https://arbital.com/p/-3gz](https://arbital.com/p/-3gz) of the [https://arbital.com/p/-field_of_fractions](https://arbital.com/p/-field_of_fractions) of $\mathbb{Z}$ (the [ring](https://arbital.com/p/3gq) of [integers](https://arbital.com/p/48l)). That is, each $q \in \mathbb Q$ is an expression $\frac{a}{b}$, where $b$ is a nonzero integer and $a$ is an integer, together with certain rules for addition and multiplication. The rational numbers are the last intermediate stage on the way to constructing the [real numbers](https://arbital.com/p/4bc), but they are also very interesting and important in their own right. One intuition about the rational numbers is that once we've created the [real numbers](https://arbital.com/p/4bc), then a real number $x$ is a rational number if and only if it may be written as $\frac{a}{b}$, where $a, b$ are integers and $b$ is not $0$. # Examples - The integer $1$ is a rational number, because it may be written as $\frac{1}{1}$ (or, indeed, as $\frac{2}{2}$ or $\frac{-1}{-1}$, or $\frac{a}{a}$ for any nonzero integer $a$). - The number [$\pi$](https://arbital.com/p/49r) is not rational ([proof](https://arbital.com/p/513)). - The number $\sqrt{2}$ (being the unique positive real which, when multiplied by itself, yields $2$) is not rational ([proof](https://arbital.com/p/548)). # Properties - There are infinitely many rationals. (Indeed, every integer is rational, because the integer $n$ may be written as $\frac{n}{1}$, and there are infinitely many integers.) - There are countably many rationals ([proof](https://arbital.com/p/511)). Therefore, because there are [uncountably](https://arbital.com/p/2w0) many real numbers, [https://arbital.com/p/-almost_all](https://arbital.com/p/-almost_all) real numbers are not rational. - The rationals are [dense](https://arbital.com/p/dense_metric_space) in the reals. - The rationals form a [field](https://arbital.com/p/481) ([proof](https://arbital.com/p/4zr)). Indeed, they are a subfield of the real numbers. # Construction Instead of taking the reals and selecting a certain collection which we label the "rationals", [it is possible to construct the rationals](https://arbital.com/p/construction_of_complexes_from_naturals) given access only to the natural numbers; and from the rationals we may construct the reals. In some sense, this approach is cleaner than starting with the reals and producing the rationals, because the natural numbers are very intuitive objects but the real numbers are less so. We can be closer to satisfying some deep existential unease if we can build the reals out of the much-simpler naturals. As an analogy, being able to produce a block of wood given access to a wooden table is much less satisfying than the other way round, and we run into blocks of wood "in the wild" so we are pretty convinced that there actually are such things as blocks of wood. On the other hand, we almost never see wooden tables in nature, so we can't be quite as sure that they're real until we've built one ourselves. Similarly, everyone recognises broadly what a counting number is, and they're out there in the wild, but the rational numbers are somewhat less "natural" and their existence is less intuitive.
dbeaf690-938a-459b-ac1b-d20a5e76021f
trentmkelly/LessWrong-43k
LessWrong
[Review] Meta-Honesty (Ben Pace, Dec 2019) [Edit: I'm going to re-write this at some point, I don't think I managed to say much of anything very clearly. I've removed the first section.] The essay takes the same starting assumption as I do. Honesty is deadly important and something you work hard at. There are some edge cases though, and Eliezer tries to make a simple rule that captures a lot of them. Eliezer's general approach is not case-based as mine is, but more law-based, where he looks for general rules, and considers individual cases insofar as they provide him with forced-moves from the rule-based perspective. The things that motivate him aren't individual experiences, but are: * The fact that it’s easier to always say things that are technically true if you’ve got high verbal intelligence, and that this means requiring absolute honesty is unfairly easier for some than for others. * The massive number of counterfactual versions of you who would like you to keep plausible deniability when answering questions like “What did you do last night?”. * The case of hiding Jews in the attic and answering to Nazis who ask if you are hiding Jews. * Robin Hanson's problem of automatic norms, where people judge others for not immediately knowing their own norms. This induces (amongst many things) some self-doubt about whether your norms are as obviously correct as you’ve been thinking. Instead of coming up with bits of advice here and there, he comes up with a new rule.  Eliezer's rule is fairly straightforward. He acknowledges that, yes, there are situations where even a very honest person will fail to be honest. In line with what I've said above, there are many edge cases. And so he simply suggests that on top of trying hard to be honest throughout life, you should be absolutely honest about where you'll likely be honest and dishonest. Here’s how he puts it. > Be at least as honest as an unusually honest person. Furthermore, when somebody asks for it and especially when you believe they're asking for
508331d4-63e6-4039-bdf5-c192c90f9ce0
trentmkelly/LessWrong-43k
LessWrong
Confounded No Longer: Insights from 'All of Statistics' ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, > Using fancy tools like neural nets, boosting and support vector machines without understanding basic statistics is like doing brain surgery before knowing how to use a bandaid. > Larry Wasserman Foreword For some reason, statistics always seemed somewhat disjoint from the rest of math, more akin to a bunch of tools than a rigorous, carefully-constructed framework. I am here to atone for my foolishness. This academic term started with a jolt - I quickly realized that I was missing quite a few prerequisites for the Bayesian Statistics course in which I had enrolled, and that good ol' AP Stats wasn't gonna cut it. I threw myself at All of Statistics, doing a good number of exercises, dissolving confusion wherever I could find it, and making sure I could turn each concept around and make sense of it from multiple perspectives. I then went even further, challenging myself during the bits of downtime throughout my day to do things like explain variance from first principles, starting from the sample space, walking through random variables and expectation - without help. All of Statistics 1: Introduction 2: Probability In which sample spaces are formalized. 3: Random Variables In which random variables are detailed and a multitude of distributions are introduced. Conjugate Variables Consider that a random variable X is a function X:Ω→R. For random variables X,Y, we can then produce conjugate random variables XY,X+Y, with (XY)(ω)=X(ω)Y(ω)(X+Y)(ω)=X(ω)+Y(ω). 4: Expectation Evidence Preservation E(E(Y|X))=E(Y) is conservation of expected evidence (thanks to Alex Mennen for making this connection explicit). Marginal Variance V(Y)=EV(Y|X)+VE(Y|X) > Why does marginal variance have two terms? Shouldn't the expected conditional variance be sufficient? This literally plagued my dreams. Proof (of the variance; I cannot prove it plagu
ec35bcb6-7b5e-4a79-acf7-b27a6dbbc786
trentmkelly/LessWrong-43k
LessWrong
Using the universal prior for logical uncertainty (retracted) (I'm not sure about any of this) (Yup! Vanessa Kowalski pointed out a crucial error, see her comment below. The idea about the universal prior picking up all computable regularities is unproved. I'm leaving the post as it was, but please read it as a historical document only.) Abram Demski had an old paper describing a naive way to use the universal prior for logical uncertainty. He pointed out some problems with it, but I'm not sure the problems are real. I think the approach will work surprisingly well. The idea is that, since the universal prior is just a probability distribution over bit sequences, we don't have to use it to predict bits one by one. We can also update it on facts like "the seventh bit is 1" without mentioning the previous six bits. To choose which bits to update on, we can mechanically search through all possible proofs in Peano arithmetic (PA) by increasing length. Whenever we prove a sentence with Gödel number N, we tell the prior that the Nth bit is 1; whenever we disprove one, we tell the prior that the bit is 0. After each update, the prior's opinions about all other bits (and thus all other sentences in PA) will shift, hopefully in a meaningful way. It's easy to see that the probability of any provable sentence will converge to 1, because at some point it becomes 1 by fiat, and disprovable sentences will likewise go to 0. How fast is the convergence? I think it will be extremely fast, far outpacing our slow proof search and subsuming any possible human or computer reasoning about PA - even though the prior knows nothing about PA and only receives opaque bits. To see why, recall that the universal prior can pick up any computable regularities. For example, if you feed it a sequence where all even-numbered bits are 1 and odd-numbered bits are hard to predict, it will quickly become very certain that all future even-numbered bits will be 1. Similarly, if there's some decidable set of Gödel numbers that for some reason correspond to prova
3b1b7b4a-4c5b-485d-b1d2-3edf194defa3
trentmkelly/LessWrong-43k
LessWrong
One night, without sleep In memory of all those who fell and were not saved (I normally have little regard for trigger warnings, but on this occasion, imagine that my words are prefaced with every trigger warning ever) A body, on a battlefield, maimed and bleeding; terrible pain, staring at the sky, left to die. An experience that must have happened a million times in human history, an experience that must be happening somewhere right now. My situation is better - perhaps. I lie, not on a battlefield, but in a sickbed. I just have some kind of flu; and an eye haunted by a migraine that comes, and goes, and threatens to come again; and I am not sleeping. Also, I can write. The smartphone battery was almost dead, I thought I was resigned to simply lying awake for long hours, without the catharsis of expression; but enough time passed that I stirred, reached for the laptop, hauled it onto the bed, plugged in the phone. . The monotony of describing mundane acts has removed me from the experiences that impelled me to write. Those experiences said: no one should have to endure anything like this; life should not be created. But it was not just sensation that tortured me. It was the defeat of my will, not just now, but many times. It was all the lost opportunities to create in my own life, the pointless obstruction, and being left to rust, that denied everyone the benefits of what I might have made, that negated my rare attempt to actually fix, and transform this malign existence. . The sun is still far from rising, but my mind has stirred to something like wakefulness. Possible words now queue for attention and selection, the hubbub of daylight communication, rather than crystallizing in the dark, a single phrase that repeats and repeats and repeats that it not be forgotten. And I have remembered another thought: that I am so tired of this. Of having to endure pain, whole days lost to waiting for pain to fade, in order to keep carting my burdens uphill, alone. O
790a8ee0-0b0b-46af-9eb0-db15cfd8ccbe
trentmkelly/LessWrong-43k
LessWrong
Boring & straightforward trauma explanation I thought of this a couple years ago and figured it was so obvious that it wasn't worth posting about, but people are still discussing trauma endlessly, and I have not seen an explanation written anywhere, so here's this. "Trauma" is a bad experience deemed anomalous. It means "the world is not usually like that". We do not call any behavior or emotional pattern "trauma" if it is obviously adaptive. If you are in a war and you lie down and panic when you hear loud bangs, nobody will call you traumatized. When the war is over and you panic at fireworks, people will say you are traumatized. If then a bomb lands nearby and you survive because you took cover, they'll say you were smart and acted fast. If 10% of your country got murdered a generation or two ago for not going with the political majority, then you are completely sane for shutting off the thinky brain in politics. (The USSR, China, Korea, Vietnam, Nigeria, Sudan, Afghanistan, Cambodia, Ethiopia, Lebanon...) If that won't happen where you live now, it's "intergenerational trauma". If you got raped (and half your friends did too) and you are anxious and distrustful, then maybe you are just correctly calibrated about your own social sphere. If it was in a different country 10 years ago, then it's trauma. If you got ostracized and called creepy in school, you might have a good idea of what actually happens when you ask girls out in public. If your classmates were jerks and you don't have acne anymore, it's trauma. If you got emotionally beat up by all your exes, then being less open might be a good idea. If they were all cocaine addicts from the same town, it's trauma. If driving a car makes you want to throw up since your last accident, you might be a bad driver. If a helicopter landed on you, it's trauma. (I have heard it said that if you're worried about your driving then you're probably a safe driver. The last person I met who was seriously worried about their driving totaled her car a week later. Ev
d20ab53d-fbec-443d-81a7-b163ac7d9b8d
trentmkelly/LessWrong-43k
LessWrong
“The Era of Experience” has an unsolved technical alignment problem Every now and then, some AI luminaries * (1) propose that the future of powerful AI will be reinforcement learning agents—an algorithm class that in many ways has more in common with MuZero (2019) than with LLMs; and * (2) propose that the technical problem of making these powerful future AIs follow human commands and/or care about human welfare—as opposed to, y’know, the Terminator thing—is a straightforward problem that they already know how to solve, at least in broad outline. I agree with (1) and strenuously disagree with (2). The last time I saw something like this, I responded by writing: LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem. Well, now we have a second entry in the series, with the new preprint book chapter “Welcome to the Era of Experience” by reinforcement learning pioneers David Silver & Richard Sutton. The authors propose that “a new generation of agents will acquire superhuman capabilities by learning predominantly from experience”, in some ways like a throwback to 2018. Again, I agree with this part. Then later on, they talk about AI motivations, with the following desideratum: > …a general-purpose AI that can be steered reliably towards arbitrary user-desired behaviours… They sketch a plan for making that desideratum actually happen. My post outline is: * In Section 1, I will describe their plan for making it such that their reinforcement learning (RL) agents “can be steered reliably towards arbitrary user-desired behaviours”; * In Section 2, I’ll explain why we should expect their plan to fail, for deep reasons that cannot be easily patched. Instead, the plan would lead to a powerful AI that’s a bit like a human sociopath, with callous indifference to whether humans (including its own programmers and users) live or die. It will act cooperative when acting cooperative is in its selfish best interest, and stab you in the back the moment that changes. For context, I have been wo
4ba011f1-f281-44e0-a825-6a537f0295c7
trentmkelly/LessWrong-43k
LessWrong
Dennett's heterophenomenology In an earlier comment, I conflated heterophenomenology in the general sense of taking introspective accounts as data to be explained rather than direct readouts of the truth, with Dennett's particular approach to explaining those data.  So to correct myself, I say that it is Dennett, rather than heterophenomenology, that claims that there is no such thing as consciousness. Dennett denies that he does, but I disagree. I defend this view here. I have to admit at this point that I have not read "Consciousness Explained".  Had either of the library's copies been on the shelves last Tuesday I would have done by now, but instead I found his later book (and his most recent on the topic), "Sweet Dreams: Philosophical Obstacles to a Science of Consciousness".  The subtitle suggests a drawing back from the confidence of the earlier title, as does that of the book in between.  The book confirms me in my impression that the ideas of "C.E." have been in the air so long (the air of hard SF, sciblogs, and the like, not to mention Phil Goetz's recent posts) that reading the primary source 19 years on would be nothing more than an exercise in checkbox-ticking. I'll give a brief run-through of "Sweet Dreams" and then carry on the argument. The book is primarily writing against, a response to objections arising from earlier works. In chapter 1 he shoots down the "Zombic Hunch", the idea that a being could be physically identical to a human, but lack consciousness, and therefore that consciousness must be non-physical.  I'll take it that we all agree the zombie story is insane. Chapter 2 introduces the concept of heterophenomenology.  This has already been introduced to LW. The corpse of the Zombic Hunch got up again and walked, so in Chapter 3 he shoots it down again. Chapters 4 and 5 attack qualia.  They don't exist, says Dennett, because of such anomalies as change blindness, Capgras syndrome, and various thought-experiments that defy all coherent accounts of what qualia are.
4c8ac142-407a-4333-a1a6-3645ef7340d3
trentmkelly/LessWrong-43k
LessWrong
Short essays on various things I've watched Sometimes I write things in places that aren't here. Sometimes I think those things are worth preserving. Some of those things follow, with minor editing, mostly on the subject of various movies that I've watched. Also two stage musicals, one TV show, one short story, and one music video. They were written over the past four years, so I can't necessarily talk intelligently about these things any more. When I name a thing, spoilers generally follow. I don't really get The Hurt Locker It seemed like it was going to be a film about a cowboy with a death wish, who should be removed from duty but isn't because of institutional dysfunction in the US army. Or something along those lines. Instead it turned out to be… a film about a cowboy with a death wish? Like there's that bit where he's just been a fucking idiot who could have gotten everybody killed (not very specific so far) And someone who I got the impression outranks him comes up to him like «oh you seem like hot shit. Just how hot shit are you? Yeah that's pretty hot shit» in a tone that I swear is subtextually «I'm about to rip you a new one» and then the scene just ends. No one gets ripped a new anything. What? His team mates realise how dangerous he is to be around. But they don't do anything about it, just get pissed at him. And also he's supposedly defused almost a thousand bombs. There's tension there, a question of how he hasn't died yet (did he used to be good at his job and recently something happened to make him so reckless?) but the film doesn't acknowledge it, it doesn't seem to think "being a fucking dangerous idiot cowboy" and "successfully defusing almost a thousand bombs" are at all incompatible? This was a really well regarded movie. It won six Oscars, including best picture, best screenplay and best director. Part of me would be inclined to chalk that up to politics, but the movie isn't even especially political from what I could tell; I'm pretty sure it could have been made both more po
e3b9f57d-97dd-44d3-9ecc-e88d95c80da2
trentmkelly/LessWrong-43k
LessWrong
Self-Responsibility Note: This is meant to be legible to high school students who are not LessWrong regulars. > If you'd asked me in high school what the difference was between high school kids and adults, I'd have said it was that adults had to earn a living. Wrong. It's that adults take responsibility for themselves. Making a living is only a small part of it. Far more important is to take intellectual responsibility for oneself. > ―Paul Graham, What You'll Wish You'd Known Intro I go to a public high school. I've observed the same things Paul Graham has. Students typically allow themselves to follow the system wherever it takes them, without pausing to think about whether their goals are best achieved by exclusively putting energy into a college application for four years.[1] Because of this, it is rare to see people pursue self-responsibility. This post is meant to sketch out the details of what being self-responsible looks like, why highschoolers should start being self-responsible, and what to do once you've taken responsibility for yourself. Self-responsibility Self-responsibility is understanding that you are the only person who can give you the life you want. Self-responsibility looks like spending time writing down what you want to fix in your life, and then brainstorming how you can make positive changes. Self-responsibility looks like having the audacity to ask people for what you want. Self-responsibility looks like realizing that it takes effort to improve yourself, and creating a plan to learn skills and develop character traits you want to have. Tony Stark in a cave. Mark Watney trapped on Mars. This is the type of self-responsibility I'm talking about. Unlike those two, failing to become self-responsible probably won't kill you. There just won't be anyone trying to help you reach your goals in life. Self-awareness When I ask people what they want to change in their life, they usually have to think about it; it's not a question people ask themselves. We don't of
c4128c09-1d81-42f9-a296-56f5b1ce99d4
trentmkelly/LessWrong-43k
LessWrong
The Ultimate Sleeping Beauty Problem I got into a heated debate a couple days ago with some of my (math grad student) colleagues about the Sleeping Beauty Problem. Out of this discussion came the following thought experiment: Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: She will be put to sleep. During the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing anti-aging drug that makes her forget that awakening. A fair coin will be tossed until it comes up heads to determine which experimental procedure to undertake: if the coin takes n flips to come up heads, Beauty will be wakened and interviewed exactly 3^n times. Any time Sleeping Beauty is wakened and interviewed, she is asked, "What is your subjective probability now that the coin was flipped an even number of times?" I will defer my analysis to the comments.
79f0325f-56a1-4ae4-a71d-28f2bd495641
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Can we simulate human evolution to create a somewhat aligned AGI? Cross-post [from LessWrong](https://www.lesswrong.com/posts/qdmfKbEaZeiZs3buq/can-we-simulate-human-evolution-to-create-a-somewhat-aligned). *Epistemic status: less than 20% chance that this is a good idea. I've spent about 25 hours thinking about this.* **TL;DR: If AI alignment is intractable, but human evolution is robust in producing something close to human values, we could try to simulate/mimic human evolution to create a superintelligent successor AI.  This plan would have various problems, especially training competitiveness.** Many people I know put significant credence in the following three statements: * AI alignment is very difficult, and has a <<50% chance of success, largely because we have no idea how to put human values into an AI. * Even if AI alignment is intractable, we know of one process that definitely creates general intelligence with human values: the evolution and cultural development of humans. * Medium-sized perturbations to the human evolutionary process wouldn't destroy human values. I have heard MIRI people guess that an AI aligned with elephants, or with aliens that evolved in a similar environment to humans ("humanlike aliens"), would be much better than an AI with a random goal chosen through SGD, perhaps 50% as good as an AI aligned with human [CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition). If you're one of these people, this post describes a different approach to making the future go well which you might find plausible. While alignment requires some way to specify human values and put them into an AI, a successor species plan requires creating a system that generates values similar to ours, from scratch, by some process similar to human evolution. The resulting superintelligence will not be totally aligned with human values. However, if the plan is sufficiently tractable while alignment is intractable, it may have higher expected value than aiming for aligned AI. I believe that such plans are neglected because many AI pessimists are uncomfortable thinking about this backup plan. How to create a successor AI by simulating evolution? ----------------------------------------------------- If we had infinite compute and perfect knowledge of the evolutionary history of humans, we could just simulate the evolution of humans and end up with simulated humans, who could then either become superintelligent through self-modification, or solve the AI alignment problem by thinking for thousands of subjective years, likely achieving human CEV. If we had infinite compute but not perfect knowledge, we could execute [Paul Christiano's plan](https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e) of simulating the entire evolutionary process of some intelligent aliens, and achieve humanlike-alien CEV. Sadly, we have neither infinite compute nor perfect knowledge. In practice, plans to create a successor species should start by deciding which features involved in the evolutionary and cultural processes leading up to current human values are necessary to generate values at least ~50% as good as human CEV ("values-important"). For example, maybe it's necessary that brain architecture is determined by genes, that it's possible to infer others' mental states by interrogating them, or that humans pair-bond allowing romantic love. If there are few enough values-important features, we can design a training procedure with as many of the values-important features as possible, but which is still training-competitive. This will hopefully get us agents, which I refer to as "simulated aliens", which are somewhat aligned with humans. Here's my sketch of a path towards a successor AI: 1. Find likely values-important features in human evolution. (This step seems hard, but likely easier than alignment). 2. Alignment looks intractable, so we develop a procedure (e.g. a series of RL environments) that replicates the evolution of social beings, that includes the most likely values-important features. 3. We get further evidence that alignment is intractable, so we implement the training procedure. 4. As the simulated agents approach human capability levels, we might modify or selectively breed them to be slightly closer to human values. 5. The simulated agents evolve/train into superintelligences, break out of the simulation, and take over Earth before humans can develop a misaligned AI through other means. Will this actually work? ------------------------ A few things have to be true for a successor AI plan to work, and I analyze each of these below. 1. The CEV of humanlike aliens must be substantially closer to our values than unaligned goals, such that we think it's at least ~50% as good as creating an aligned AGI (where the baseline is an unaligned AGI). That is, human values must not be sensitive to the differences between human evolution and an infinite-compute simulation of human evolution based on our best knowledge. 2. Developing this plan is more tractable than alignment, assuming it is possible. 3. The training process must approximate alien CEV (and there's no interaction between the error here and the error from (1) that destroys all the value.) 4. The plan is sufficiently training-competitive. 5. The agent that is deployed doesn't create an unaligned AGI of its own. 6. The plan does not involve humans developing dangerous capabilities that lead to unaligned AGI. Overall, it seems fairly unlikely that a plan like this works, so I'm mostly posting this so that people can start thinking in the direction of related, better plans, or think about the underlying moral questions. ### 1. Aren't human values fragile? Complexity and fragility of value have been written about on LW since Eliezer's [values](https://www.lesswrong.com/s/3HyeNiEpvbQQaqeoH) [sequences](https://www.lesswrong.com/s/9bvAELWc8y2gYjRav): > **Complexity of value** is the thesis that human values have high[Kolmogorov complexity](https://wiki.lesswrong.com/wiki/Kolmogorov_complexity); that our[preferences](https://wiki.lesswrong.com/wiki/preferences), the things we care about, cannot be summed by a few simple rules, or compressed.[**Fragility of value**](https://www.lesswrong.com/lw/y3/value_is_fragile/) is the thesis that losing even a small part of the rules that make up our values could lead to results that most of us would now consider as unacceptable (just like dialing nine out of ten phone digits correctly does not connect you to a person 90% similar to your friend). For example, all of our values *except* novelty might yield a future full of individuals replaying only one optimal experience through all eternity. > > [(from the LW wiki)](https://www.lesswrong.com/tag/complexity-of-value) I think the successor AI plan is consistent with a weak version of value fragility, something like "our CEV cannot be summed up by a few simple rules". My view is roughly that while *an actual list of the things valued by human CEV* is fragile (removing one small piece like novelty can remove most of the value of the future), the *process that produced human values* is not necessarily fragile (making a small change to the evolutionary and cultural processes that created our values might retain most of the value of the future). I put significant credence on the possibility that reaching 50% of the value of human CEV requires fewer than 100 values-important features of human evolution. I consider the moral questions here very important, but I'm very confused about them and detailed reasoning about this position is outside the scope of this post. However, a stronger value-fragility thesis could turn out to be true; if you take the Kolmogorov complexity claim in the quote literally, then human values cannot be compressed into any short program, even if the program is "simulate evolution with the top 100 value-important features". Maybe human values depend on a large number of incidental steps in evolution that we can't possibly identify, in which case your intuition that alien or elephant CEV is ok would be wrong, and this plan would be doomed.   ### 2. Will this be more tractable than alignment? Unknown; it seems hard to identify which features of human evolution are important in creating human values, but AI pessimists claim that we have no idea how to solve alignment either. In any case, if we disregard plans for partially aligned successor AI, and continue to frame the problem of maximizing the expected value of the future as maximizing the probability of fully solving alignment, we could be leaving value on the table.   ### 3. Will the training process approximate alien CEV? After we identify features of evolution that might be necessary to yield humanlike values, we need to actually design the training process, and decide which features to keep vs discard. Many features will probably add inefficiencies into the training process, so we will have to include enough value-important features that the AI is alien-aligned, but not so many that the plan is impractically uncompetitive. If you're really pessimistic about coordination, then maybe you think the version of the tradeoff we're heading for is that none of the value-important features of evolution are simulated and none of alien CEV is retained. However, it might be possible to do better than alien CEV in some respects. Depending on the amount of compute and coordination we have, we might be able to do selective breeding on the last few generations for inclination to cooperate with aliens, wide moral circles, caution about AGI-like activities, etc. If performed for a short period at the end before they surpass humans, deception is probably not a huge concern. In an optimistic scenario, this has as much selection pressure as the process that turned wolves into dogs (thousands of bits), and so we only need the generators of humanlike values to *not be vanishingly unlikely* in the space of agents generated by the simulation.   ### 4. Will this be competitive? Probably not.  First, note that simulating the evolution of an entire species is uncompetitive by [many orders of magnitude](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#heading=h.cebevgwmadke). But real life evolution is extremely inefficient at creating intelligence in ways we can immediately fix: * Humans take ~20 years to grow up and produce children, while in simulation we can probably download memories onto newborn simulated aliens; * Humans must carry around an entire body with ~50 times the mass of the brain; * To prevent inbreeding depression, any isolated population must have >500 individuals; * (speculative) Evolution and SGD might be similar optimization algorithms in that they optimize locally, with the biggest difference being that evolution operates on the L1 norm, so we can probably replace evolution with steepest descent on the L1 norm and possibly by SGD. These inefficiencies can all be removed in a simulation, so I'm confident that we can do basically the same thing as evolution with several orders of magnitude less compute. This is likely still not competitive enough, but we might find more competitive plans if the most compute-intensive parts of human evolution turn out to not be values-important. For example, perhaps at subhuman capability level, the aliens' values will become crystallized and we can train the aliens from elephant-level towards superintelligence using standard RL techniques. That said, this plan will probably have a competitiveness disadvantage compared to totally unaligned AI, so we would have to have good enough coordination to make the successor AI. For example, if the successor AI plan takes 3x more compute than totally unaligned AGI and is technically simple, we might be able to coordinate around creating the successor AI anyway. If it's 1000x, maybe not.   ### 5. Won't the simulated aliens create unaligned AGI? The successor AI is already an artificial superintelligence. It is possible that it will need to solve a version of the alignment problem itself, but this doesn't seem like an issue: if a superintelligence can't solve alignment, then we couldn't either. And the simulated aliens probably won't develop their own misaligned AIs before becoming superintelligent, because: * we can warn them that the alignment problem is hard * they can likely more easily self-modify to become superintelligent than design AGI from scratch * we can construct the environment so they don't have an AI hardware overhang, or economic incentives towards AI capabilities * we can stop the simulation if we see them trying to develop AGIs ### 6. Won't implementing this plan require dangerous capabilities? It would be irresponsible to develop new, more capable RL architectures just for the successor AI plan. Other elements of the plan, like research into values-important features of human evolution, seem fine; and if the training procedure can be adapted to new RL architectures easily, only the actual implementation of the plan will involve cutting-edge capabilities. However, the risk from implementation seems large; due to the uncompetitiveness of the training process, the risk seems somewhat worse than the risk of whole brain emulation causing unaligned neuromorphic AI, which is already quite large. If we're really lucky, there could be an RL architecture uniquely good at creating a successor species but that does not advance AGI timelines, but I wouldn't count on it.  --- *Thanks to Tamera Lanham, Sydney Von Arx, Malo Bourgon, John Wentworth, Oliver Habryka, Jack Ryan, Drake Thomas, and others for helpful feedback.*
47fb88f9-2775-45bd-9115-7e37baaf508c
trentmkelly/LessWrong-43k
LessWrong
Tactical Nuclear Weapons Aren't Cost-Effective Compared to Precision Artillery Epistemic status: there is probably a US Army analysis that does this much better.   Tactical nuclear weapons (TNWs) are often times portrayed as being significantly more effective than conventional artillery. My research indicates that they are actually less effective than precision weapons of similar cost. These have an average deviation of 2m with GPS guidance, far lower than their kill radius. I've seen people wondering why Russia wasn't using S-400 anti-air systems to intercept HIMARS strikes, with some reasoning that Ukraine was using them in conjunction with unguided rockets to make them difficult to identify. The real reason is economics. S-400 systems are something like $400 million (!!) per system (8 launchers + radar + control systems + 72 missile package). This comes to something like $5 million per interceptor launched. In contrast, HIMARS are $3.5 million per launcher + reloader system. The procurement cost for GMLRS systems, used by M270 and HIMARS, is around $150,000 per M30 rocket. M30s don't need decoys, since any interception system is going to be more expensive than it is. Compare the cost of an M30 to a tactical nuclear warhead. The B61-7 warhead (300-400kt) costs around $28 million to build, and a lot to maintain. It also needs a delivery system, which is going to increase the price even more. Let's assume the most optimistic kill radius of 8km, which Nuke Map gives as the 100% 3rd degree burn distance for unprotected civilians. The kill-radius against armored vehicles is going to be far less (probably less than 2km for a ground-burst). In reality, troops don't neatly fit themselves into an 8km circle. They tend to spread themselves into a line. The vast majority of the killing power of a nuclear weapon is wasted against a line of troops. In order to kill a tank with a tactical nuke, it needs to either be destroyed by the air blast (it's a 60-ton tank, a detrack is probably the best you get), or literally boil off the metal to the point tha
edf1e81c-e079-4a10-8c03-2fa78ce28cef
trentmkelly/LessWrong-43k
LessWrong
Asymptotic Logical Uncertainty: Solomonoff Induction Inspired Approach This is post is part of the Asymptotic Logical Uncertainty series. This is about a failed attempt to satisfy the Benford test. I will not prove things about this approach. The only reason I am including this approach is because the fact that it does not work is surprising and insightful. Let Geometric(12) be the function which outputs n∈N with probability 2−m. Fix a prefix-free encoding of probabilistic Turing machines, and let ℓ(M) denote the number of bits used to encode M. Let RTM(k) be the function which outputs the Turing machine M with probability proportional to 2−kμ(M). Fix E, a deterministic Turing Machine (for example E=L). Fix a time complexity function T(n) and another time complexity function R(n)∈o(T(n)logT(n)). Consider the following algorithm. SIA(E,T,R) 1: n=0 2: N=1 3: M=RTM(3) 4: g=Geometric(1/2) 5: loop 6: if 4^n*R(N)*log(R(N))*N<T(N) then 7: run E for one more time step 8: if E outputs another bit then 9: n=n+1 10: repeat 11: M=RTM(3) 12: g=Geometric(1/2) 13: until M^{gR}(i)=E(i) for all i<=n 14: output M^{gR}(N) 15: N=N+1 This code is inspired by Solomonoff Induction. We cannot directly apply Solomonoff induction, because the lack of a time bound. Solomonoff induction would pick out the program E, but that will not allow us to predict E quickly. We have to restrict the run times of the programs to ensure that they can compute their Nth bit in time T(N). When trying to predict E(N), we compute the first n values of E for some n much smaller than N. We then sample probabilistic Turing machines until we find one that quickly gets all of these first n bits correct. We use that Turing machine to compute our guess at E(N). The purpose of the geometric random variable g is to make the time we allow the sampled Turing machines to run more flexible. The sampled Turing machines can take any amount of time in O(T(i)) but get an extra penalty fo
b5faf455-f949-496c-953a-06014d4a58e3
trentmkelly/LessWrong-43k
LessWrong
Benford Test with Gaps In this post, I present a new desirable property in Asymptotic Logical Uncertainty. The property is the analog of satisfying the Benford test, but with a collection of undecidable sentences mixed in. I do not know of any algorithm that satisfies this property yet. I give a proof that a similar stronger property is actually impossible to satisfy. Consider an algorithm M which on input n runs in f(n) time, and outputs a probability interpreted as the probability to assign to ϕn. We say that a subset of natural numbers S is (p,q)-irreducible pattern if it is possible to quickly determine if n is in S, and the provability/disprovability of elements of S appears (relative to f(n) time) indistinguishable from a coin which outputs "true" with probability p, "false" with probability q, and "undecidable" with probability 1−p−q. I am purposefully being vague about the exact definition of irreducible pattern here. You could define it similarly to how we defined it before, but I am open to other definitions. It should at least be the case that if the theorem prover was randomly determining whether elements of the sequence are provable, disprovable, or undecidable with the corresponding probabilities, then the resulting sequence should almost surely be a (p,q)-irreducible pattern. We say that M passes the Benford test with gaps if whenever S is a (p,q)-irreducible pattern, the limit as n∈S goes to infinity of the distance between M(n) and the interval [p,1−q] is 0. At first, this property may appear weaker than it has to be. One might be tempted to instead require that the probability converges to pp+q. However, this property would be too strong. To see that this is too strong, consider an environment in which the nth sentence is true with probability 1/3, false with probability 1/3, and otherwise undecidable. We would need that with probability 1, the probabilities converge to 1/2. In fact, take a random sequence like this, and change finitely many of the undecidable sen
c9ca914a-2b67-4799-89de-9233457f41a3
trentmkelly/LessWrong-43k
LessWrong
Biased AI heuistics Heuristics have a bad rep on Less Wrong, but some people are keen to point out how useful they can sometimes be. One major critique of the "Superintelligence" thesis, is that it presents an abstract, Bayesian view of intelligence that ignores the practicalities of bounded rationality. This trend of thought raises some other concerns, though. What if we could produce an AI of extremely high capabilities, but riven with huge numbers of heuristics? If these were human heuristics, then we might have a chance of of understanding and addressing them, but what if they weren't? What if the AI has an underconfidence bias, and tended to chance its views too fast? Now, that one is probably quite easy to detect (unlike many that we would not have a clue about), but what if it wasn't consistent across areas and types of new information? In that case, our ability to predict or control what the AI does may be very limited. We can understand human biases and heuristics pretty well, and we can understand idealised agents, but differently biased agents might be a big problem.
82afd83e-adbf-4345-9f94-62e08d952b73
trentmkelly/LessWrong-43k
LessWrong
Understanding information cascades Meta: Because we think understanding info cascades are important, we recently spent ~10 hours trying to figure out how to quantitatively model them, and have contributed our thinking as answers below. While we currently didn't have the time to continue exploring, we wanted to experiment with seeing how much the LW community could together build on top of our preliminary search, so we’ve put up a basic prize for more work and tried to structure the work around a couple of open questions. This is an experiment! We’re looking forward to reading any of your contributions to the topic, including things like summaries of existing literature and building out new models of the domain. Background Consider the following situation: > Bob is wondering whether a certain protein injures the skeletal muscle of patients with a rare disease. He finds a handful papers with some evidence for the claim (and some with evidence against it), so he simply states the claim in his paper, with some caution, and adds that as a citation. Later, Alice comes across Bob’s paper and sees the cited claim, and she proceeds to cite Bob, but without tracing the citation trail back to the original evidence. This keeps happening, in various shapes and forms, and after a while a literature of hundreds of papers builds up where it’s common knowledge that β amyloid injures the skeletal muscle of patients with inclusion body myositis -- without the claim having accumulated any more evidence. (This real example was taken from Greenberg, 2009, which is a case study of this event.) An information-cascade occurs when people update on each others beliefs, rather than sharing the causes of those beliefs, and those beliefs end up with a vestige of support that far outstrips the evidence for them. Satvik Beri might describe this as the problem of only sharing the outputs of your thinking process, not your inputs. The dynamics here are perhaps reminiscent of those underlying various failures of collective ration
0fb8c2de-b530-4e04-8ffd-5e3671fcae60
trentmkelly/LessWrong-43k
LessWrong
CFAR website launched The new Center for Applied Rationality website has launched! We'll be adding content as time goes by. Let us know if you find broken links, etc.
a823fde5-6451-46c8-9453-08f7e434fa14
StampyAI/alignment-research-dataset/agisf
AGI Safety Fund
ML Systems Will Have Weird Failure Modes Previously, I've argued that future ML systems might exhibit [unfamiliar, emergent capabilities](https://bounded-regret.ghost.io/p/1527e9dd-c48d-4941-9b14-4f7293318d5c/), and that thought experiments [provide one approach](https://bounded-regret.ghost.io/p/a2d733a7-108a-4587-97fb-db90f66ce030/) towards predicting these capabilities and their consequences. In this post I’ll describe a particular thought experiment in detail. We’ll see that taking thought experiments seriously often surfaces future risks that seem "weird" and alien from the point of view of current systems. I’ll also describe how I tend to engage with these thought experiments: I usually start out intuitively skeptical, but when I reflect on emergent behavior I find that some (but not all) of the skepticism goes away. The remaining skepticism comes from ways that the thought experiment clashes with the ontology of neural networks, and I’ll describe the approaches I usually take to address this and generate actionable takeaways. Thought Experiment: Deceptive Alignment --------------------------------------- Recall that the [optimization anchor](https://bounded-regret.ghost.io/p/a2d733a7-108a-4587-97fb-db90f66ce030/) runs the thought experiment of assuming that an ML agent is a perfect optimizer (with respect to some "intrinsic" reward function $R$). I’m going to examine one implication of this assumption, in the context of an agent being trained based on some "extrinsic" reward function $R^\*$ (which is provided by the system designer and not equal to $R$). Specifically, consider a training process where in step $t$, a model has parameters $\theta\_t$ and generates an action $a\_t$ (its output on that training step, e.g. an attempted backflip assuming it is being trained to do backflips). The action $a\_t$ is then judged according to the extrinsic reward function $R^\*$, and the parameters are updated to some new value $\theta\_{t+1}$ that are intended to increase $a\_{t+1}$'s value under $R^\*$. At some point, the model is then deployed with final parameters $\theta\_T$, and continues to take actions. The following diagram illustrates this process: ![deception1](https://bounded-regret.ghost.io/content/images/2022/01/deception1.png) Now, let’s assume that the model $\theta\_t$ is a perfect optimizer whose objective is to maximize the discounted value of an intrinsic reward $R \neq R^\*$. That is, $\theta\_t$ picks the action $a\_t$ satisfying $a\_t = \operatorname{argmax}\_{a} \mathbb{E}[\sum\_{s=0}^{\infty} \gamma^{-s} R(a\_{t+s}) \mid a\_t = a]$. (I know that this is an unrealistic assumption. We’ll examine the assumption in detail in the next section, but for now please grant it even if it requires suspending disbelief.) What action $a\_t$ will $\theta\_t$ pick? Let’s define $a^R = \operatorname{argmax}\_{a} R(a)$ and $a^\* = \operatorname{argmax}\_{a} R^\*(a)$--that is, $a^R$ maximizes the (instantaneous) intrinsic reward while $a^\*$ maximizes the extrinsic reward. Assuming that $\gamma$ is close to 1, the model wants $a^R$ to be picked on as many future rounds as possible (including at deployment time). If $\theta\_t$ plays myopically, this means it would pick $a\_t = a^R$. However, this is not the optimal non-myopic action: if $a\_t = a^R$, then $\theta\_{t+1}$ will be updated towards producing outputs less like $a^R$ and more like $a^\*$, so outputting $a^R$ in the current round leads to fewer $a^R$ outputs in future rounds. Instead, I think the optimal non-myopic action is approximately the following: during training time, output $a^\*$ on every time step, so that the training process doesn’t update $\theta$ (since it’s already optimal) and hence $\theta\_t = \theta\_{t+1} = \cdots = \theta\_T$. This guarantees that the deployed agent is $\theta\_t$, which is now free to output $a^R$ on every time step without risk of being modified by the training process.[[1]](#fn1) The myopic and non-myopic outcomes are illustrated in the figure below: ![](https://bounded-regret.ghost.io/content/images/2022/01/deception2.png) *Myopic and non-myopic agents with an intrinsic reward, $R$. The myopic agent plays high intrinsic-reward actions during training, but is eventually modified to output action $a^\*$ with high extrinsic reward $R^\*$. The non-myopic agent instead plays $a^\*$ during training so that it can pursue its intrinsic reward at deployment, by playing $a^R$ at all post-deployment time steps.* To put it another way–the model’s behavior will look aligned with $R^\*$ during the training process, but will switch to being non-aligned once deployed: so-called "deceptive alignment". This behavior is unintuitive to me, since it shows that a system’s behavior can shift significantly between train and test, even in the absence of an obvious distribution shift between train and test outputs[[2]](#fn2). Engaging with Deceptive Alignment --------------------------------- When I first heard the above argument, I thought it was pretty crazy and implausible, mainly because my intuition said this "just wasn't how ML systems worked". When I think about why I feel that way, I realize it’s because the scenario invokes capabilities that ML is currently bad at: long-term planning and understanding complex features of the environment (i.e. the training process and its ramifications). However, emergence implies that these properties could easily appear in the future, even without explicit design[[3]](#fn3). As a result, I’ve come to discount this particular intuition. However, I do think there are subtler reasons to think the deceptive alignment story won’t play out as written. Here are a few: 1. It’s not clear why the model $\theta$ would come to be optimizing a reward function $R$ in the first place. Yes, it is the case that deceptively aligned models achieve the global minimum of training loss, so in that sense they are incentivized by the training process. But so is an actually aligned model, so which one you end up with has to depend on the inductive bias of the training process. 2. Reward functions are simpler than policies and typically learned faster. So by the time the system is smart enough to have long-term plans, it will already have a very good representation of its intended reward function. We thus might hope that most of the model's internal representations are devoted to achieving high reward in a straightforward manner rather than through long-term deception. 3. To the extent that a model is not aligned, it probably won’t be the case that it's deceptively aligned with an explicit reward function R---that's a very specific type of agent and most agents (including humans) are not maximizing any reward function, except in the trivial sense of "assign reward 1 to whatever it was going to do anyway, and 0 to everything else". 4. Deceptive alignment is a specific complex story about the future, and complex stories are almost always wrong. I find these points persuasive for showing that deceptive alignment *as explicitly written* is not that likely, but they also don't imply that there's nothing to worry about. Mostly they are an argument that your system might be aligned and might be misaligned, that if it is misaligned it won’t be *exactly* in the form of deceptive alignment, but ultimately what you get depends on inductive bias in an unknown way. This isn't particularly reassuring. **What I take away from thought experiments.** Per the discussion above, the failure mode in my head is not "deceptive alignment as written above". Instead it’s "something kind of like the story above but probably different in lots of details". This makes it harder to reason about, but I think there are still some useful takeaways: * After thinking about deceptive alignment, I am more interested in supervising a model’s process (rather than just its outputs), since there are many models that achieve low training error but generalize catastrophically. One possible approach is to supervise the latent representations using e.g. interpretability methods. * While I don't think neural nets will be literal optimizers, I do think it’s likely that they will exhibit "drives", in the same way that humans exhibit drives like hunger, curiosity, desire for social approval, etc. that lead them to engage in long-term coherent plans. This seems like enough to create similar problems to deceptive alignment, so I am now more interested in understanding such drives and how they arise. * Since deceptive alignment is a type of "out-of-distribution" behavior (based on the difference between train and deployment), it has renewed my interest in understanding whether larger models become more brittle OOD. So far the empirical evidence is in [the opposite direction](https://arxiv.org/abs/2006.16241?ref=bounded-regret.ghost.io), but deceptive alignment is an argument that asymptotically we might expect the trend to flip, especially for tasks with large output spaces (e.g. policies, language, or code) where "drives" can more easily manifest. So to summarize my takeaways: be more interested in interpretability (especially as it relates to training latent representations), try to identify and study "drives" of ML systems, and look harder for examples where larger models have worse OOD behavior (possibly focusing on high-dimensional output spaces). **Other weird failures.** Other weird failures that I think don’t get enough attention, even though I also don’t think they will play out as written, are Hubinger et al.'s *[Risks from Learned Optimization](https://arxiv.org/abs/1906.01820?ref=bounded-regret.ghost.io)* (AI acquires an "inner objective", somewhat similar to deceptive alignment), and Part I of Paul Christiano’s [AI failure story](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like?ref=bounded-regret.ghost.io) (the world becomes very complicated and AI systems create elaborate Potemkin villages for humans). Paul Christiano’s story in particular has made me more interested in understanding how reward hacking interacts with the sophistication of the supervisor: For instance, how much more readily do neural networks fool humans who have 5 seconds to think, vs. 2 minutes or 30 minutes? I more generally want to understand how reward hacking depends quantitatively on both supervision quality and model capacity (qualitatively, we expect higher quality $\to$ less hacking and higher capacity $\to$ more hacking). Understanding this quantitative relation would help ground Paul’s story, since he imagines a world where humans have built extremely sophisticated systems for supervising ML models, but eventually the ML models become even more powerful and game the supervision signal anyways. What To Do About Weird Emergent Failures ---------------------------------------- When thinking about how to handle emergent risks, I often reflect on the example of uranium. For context, an atomic bomb is pretty much just a bunch of uranium put together---once you get enough, the reaction becomes self-sustaining---making it a good example of More Is Different. The first nuclear reaction (not a bomb, but a [pile of uranium](https://en.wikipedia.org/wiki/Chicago_Pile-1?ref=bounded-regret.ghost.io) in an abandoned football stadium in Chicago) was engineered by Enrico Fermi. The reaction required 12,400 pounds of uranium metal piled 57 layers high. Left unsupervised, a 57-layer pile would consume itself within two hours and kill everyone in the vicinity. On the other hand, a 56-layer pile would do nothing. Fermi had a good understanding of nuclear physics and understood, from careful monitoring and underlying theory, that the pile would pass the critical threshold between layers 56 and 57. He also knew that cadmium rods would absorb neutrons and strongly inhibit the reaction. These rods were set up and the entire apparatus was carefully controlled to go only slightly supercritical. He brought the reaction to half a watt for several minutes before shutting it back down (see *[The Making of the Atomic Bomb](https://smile.amazon.com/Making-Atomic-Bomb-Richard-Rhodes/dp/1451677618?ref=bounded-regret.ghost.io)*, pp. 524). With AI, we currently lack both Fermi's conceptual understanding of the underlying risk factors and his ability to continuously measure them. We have neither a cadmium rod nor a measure of reaction criticality. But I think we can get there, by combining these weird thought experiments with [carefully chosen empirical experiments](https://bounded-regret.ghost.io/p/74d500d2-a980-4720-984a-c016284ecdc2/), which will be the topic of the next post. --- 1. Things are more complicated in reality, since $\theta\_t$ is updated even when $a\_t$ is optimal (due to noise in the training process). However, we’ll ignore this for purposes of the example. [↩︎](#fnref1) 2. Of course, there is still some distribution shift, since the agent can observe whether it is being trained or deployed. But this is a relatively minor and unintuitive shift compared to what is typically studied. [↩︎](#fnref2) 3. Of course, emergence doesn’t mean that we can just predict whatever we want–we’d need some reason to expect these specific capabilities to emerge. Long-term planning and environmental awareness are both useful for a wide variety of tasks, making them likely to emerge when training powerful models on a diverse data distribution. [↩︎](#fnref3)
f05ca83c-4c95-4010-be34-ab9c4cd20c99
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Survey on AI existential risk scenarios *Cross-posted to [LessWrong](https://www.lesswrong.com/posts/WiXePTj7KeEycbiwK/survey-on-ai-existential-risk-scenarios)*. Summary ======= * In August 2020, we conducted an online survey of prominent AI safety and governance researchers. You can see a copy of the survey at [this link](https://www.guidedtrack.com/programs/u84a3sq/run).[[1]](#fn-kkdLgiSst7biuB5Pa-1) * We sent the survey to 135 researchers at leading AI safety/governance research organisations (including [AI Impacts](https://aiimpacts.org/), [CHAI](https://humancompatible.ai/), [CLR](https://longtermrisk.org/), [CSER](https://www.cser.ac.uk/), [CSET](https://cset.georgetown.edu/), [FHI](https://www.fhi.ox.ac.uk/), [FLI](https://futureoflife.org/), [GCRI](https://gcrinstitute.org/), [MILA](https://mila.quebec/en/), [MIRI](https://intelligence.org/), [Open Philanthropy](https://www.openphilanthropy.org/) and [PAI](https://www.partnershiponai.org/)) and a number of independent researchers. We received 75 responses, a response rate of 56%. * The survey aimed to identify which AI existential risk scenarios[[2]](#fn-kkdLgiSst7biuB5Pa-2) (which we will refer to simply as “risk scenarios”) those researchers find most likely, in order to (1) help with prioritising future work on exploring AI risk scenarios, and (2) facilitate discourse and understanding within the AI safety and governance community, including between researchers who have different views. * In our view, the key result is that there was considerable disagreement among researchers about which risk scenarios are the most likely, and high uncertainty expressed by most individual researchers about their estimates. * This suggests that there is a lot of value in exploring the likelihood of different AI risk scenarios in more detail, especially given the limited scrutiny that most scenarios have received. This could look like: + Fleshing out and analysing the scenarios mentioned in this post which have received less scrutiny. + Doing more horizon scanning or trying to come up with other risk scenarios, and analysing them. * At this time, we are only publishing this abbreviated version of the results. We have a version of the full results that we may publish at a later date. Please contact [one](mailto:sc2260@cam.ac.uk) [of](mailto:jonas.schuett@legalpriorities.org) [us](mailto:alexis.carlier@governance.ai) if you would like access to this, and include a sentence on why the results would be helpful or what you intend to use them for. * We welcome feedback on any aspects of the survey. Motivation ========== It has been argued that AI could pose an existential risk. The original risk scenarios were described by [Nick Bostrom](https://global.oup.com/academic/product/superintelligence-9780199678112) and [Eliezer Yudkowsky](https://intelligence.org/files/AIPosNegFactor.pdf). More recently, these [have](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw) [been](https://sideways-view.com/2018/02/24/takeoff-speeds/) [criticised](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf), and [a](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) [number](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) [of](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact) [alternative](https://longtermrisk.org/research-agenda) [scenarios](https://www.alignmentforum.org/posts/HTgakSs6JpnogD6c2/two-neglected-problems-in-human-ai-safety) [have](https://www.alignmentforum.org/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty) [been](https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) [proposed](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic). There has been some useful work exploring these alternative scenarios, but much of this is informal. Most pieces are only presented as blog posts, with neither the detail of a book, nor the rigour of a peer-reviewed publication. For further discussion of this dynamic, see work by [Ben](https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff) [Garfinkel](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/), [Richard](https://www.alignmentforum.org/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety) [Ngo](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) and [Tom Adamczewski](https://fragile-credences.github.io/prioritising-ai/). The result is that it is no longer clear which AI risk scenarios experts find most plausible. We think this state of affairs is unsatisfactory for at least two reasons. First, since many of the proposed scenarios seem underdeveloped, there is room for further work analyzing them in more detail. But this is time-consuming and there are a wide range of scenarios that *could* be analysed, so knowing which scenarios leading experts find most plausible is useful for prioritising this work. Second, since the views of top researchers will influence the views of the broader AI safety and governance community, it is important to make the full spectrum of views more widely available. The survey is intended to be a first step in this direction. The survey ========== We asked researchers to estimate the probability of five AI risk scenarios, *conditional on an existential catastrophe due to AI having occurred.* There was also a catch-all “other scenarios” option. These were the five scenarios we asked about, and the descriptions we gave in the survey: * "**Superintelligence**" + A single AI system with goals that are hostile to humanity quickly becomes sufficiently capable for complete world domination, and causes the future to contain very little of what we value, as described in “[Superintelligence](https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies)”.[[3]](#fn-kkdLgiSst7biuB5Pa-3) * **Part 2 of “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)”** + This involves multiple AIs accidentally being trained to seek influence, and then failing catastrophically once they are sufficiently capable, causing humans to become extinct or otherwise permanently lose all influence over the future. * **Part 1 of “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)”** + This involves AIs pursuing easy-to-measure goals, rather than the goals humans actually care about, causing us to permanently lose some influence over the future (excluding cases where the “Superintelligence” scenario or Part 2 of “What failure looks like” also occur). * **War** + Some kind of war between humans, exacerbated by developments in AI, causes an existential catastrophe. AI is a significant risk factor in the catastrophe, such that no catastrophe would be occurred without the developments in AI. The proximate cause of the catastrophe is the deliberate actions of humans, such as the use of AI-enabled, nuclear or other weapons. See Dafoe ([2018](https://www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf)) for more detail. * **Misuse** + Intentional misuse of AI by one or more actors causes an existential catastrophe (excluding cases where the catastrophe was caused by misuse in a war that would not have occurred without developments in AI). See Karnofsky ([2016](https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity)) for more detail. We chose these five scenarios because they have been most prominent in [previous](https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/) [discussions](https://fragile-credences.github.io/prioritising-ai/) about different AI risk scenarios. For more details about the survey, you can find a copy of it at [this link](https://www.guidedtrack.com/programs/u84a3sq/run). Key results =========== There was considerable disagreement among researchers about which risk scenarios are most likely ------------------------------------------------------------------------------------------------ * If you take the median response for each scenario and compare them, those (conditional) probabilities are fairly similar (between 10% and 12.5% for the five given scenarios, and 20% for “other scenarios”).[[4]](#fn-kkdLgiSst7biuB5Pa-4) However, *individual responses* vary greatly (from the median). For instance, most respondents thought at least one scenario was quite unlikely: + 96% of respondents assigned ≤10% (conditional) probability to at least one scenario. + 89% of respondents assigned ≤10% (conditional) probability to at least two scenarios. + 64% of respondents assigned ≤10% (conditional) probability to at least three scenarios.[[5]](#fn-kkdLgiSst7biuB5Pa-5) * There were a number of outliers: for each scenario, at least one respondent estimated them to have ≥70% (conditional) probability. * For each scenario (including “other scenarios”), the [mean absolute deviation](https://en.wikipedia.org/wiki/Average_absolute_deviation#Mean_absolute_deviation_around_the_mean) of responses was somewhere between 9% and 18%. + E.g. for the “Superintelligence” scenario, the mean absolute deviation was 13%. This means that the average (absolute) distance from the mean estimate was 13 percentage points. + To help interpret this: recall that the means are all between 15% and 25% (see footnote 4) - so the mean absolute deviations are relatively large compared to (conditional) probabilities themselves.[[6]](#fn-kkdLgiSst7biuB5Pa-6) * For each scenario (including “other scenarios”), the interquartile range of responses was somewhere between 15% and 31%. + E.g. for the “Superintelligence” scenario, the first quartile response was 5% and the third quartile response was 20% (so the interquartile range was 15%). * These statistics suggest considerable disagreement among researchers about which risk scenarios are the most likely. Researchers are uncertain about which risk scenarios are most likely -------------------------------------------------------------------- The median self-reported confidence level given by respondents was 2, on a seven point Likert scale from 0 to 6, where: * Confidence level 0 was labelled “completely uncertain, I selected my answers randomly”, and * Confidence level 6 was labelled “completely certain, like probability estimates for a fair dice”. Researchers put substantial credence on “other scenarios” --------------------------------------------------------- The “other scenarios” option had the highest median probability, at 20%. Some researchers left free-form comments describing these other scenarios. Most of them have seen no public write-up, and the others have been explored in less detail than the five scenarios we asked about. Key takeaway ============ Together, these three results suggest that there is a lot of value in exploring the likelihood of different risk scenarios in more detail. This could look like: * Fleshing out and analysing the scenarios mentioned in this post, in more detail. + This seems especially important given that the median and mean probability estimates were similar for all the scenarios, and yet the “Superintelligence” scenario has received far more scrutiny than the others. * Doing more horizon scanning or trying to come up with other risk scenarios, and analysing them. + Other than the “Superintelligence” scenario, almost all other risk scenarios (including those mentioned in this post) were only made salient in the last four years or so. Recent “failure stories” by [Andrew Critch](https://www.alignmentforum.org/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) and [Paul Christiano](https://www.alignmentforum.org/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story) - which seem to have been well-received and appreciated - also suggest that there is value in exploring different risk scenarios in more detail. Likewise, Rohin Shah [advocates](https://ssconlinemeetup.substack.com/p/video-from-rohins-talk) for this kind of work, and AI Impacts has recently compiled a [collection of stories](https://aiimpacts.org/partially-plausible-fictional-ai-futures/#Stories_intended_as_AI_futurism) to clarify, explore or appreciate possible future AI scenarios. Caveats ======= One important caveat is the tractability of exploring the likelihood of different AI risk scenarios in more detail. The existence of considerable disagreement, despite there having been some attempts to clarify and discuss these issues, could suggest that making progress on this is difficult. However, we think there has been relatively little effort towards this kind of work so far, and that there is still a lot of low-hanging fruit. Additionally, there were a number of limitations in the survey design, which are summarised in [this document](https://docs.google.com/document/d/1qRq_3cUgIF4QXHtg0owXJPYBoZP6imdMGZc4S44iYds/edit?usp=sharing). If we were to run the survey again, we would do many things differently. Whilst we think that our main findings stand up to these limitations, we nonetheless advise taking them cautiously, and as just one piece of evidence - among many - about researchers’ views on AI risk. Other notable results ===================== Most of this community’s discussion about existential risk from AI focuses on scenarios involving one or more powerful, misaligned AI systems that take control of the future. This kind of concern is articulated most prominently in “Superintelligence” and “[What failure looks like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like)”, corresponding to three scenarios in our survey (the “Superintelligence” scenario, part 1 and part 2 of “What failure looks like”). The median respondent’s total (conditional) probability on these three scenarios was 50%, suggesting that this kind of concern about AI risk is still prevalent, but far from the only kind of risk that researchers are concerned about today. 69% of respondents reported that they have *lowered* their probability estimate in the “Superintelligence” scenario (as described above)[[7]](#fn-kkdLgiSst7biuB5Pa-7) since the first year they were involved in AI safety/governance. This may be because they now assign relatively higher probabilities to other risk scenarios happening first, and not necessarily because they think that fast takeoff or other premises of the “Superintelligence” scenario are less plausible than they originally did. Full version ============ At this time, we are only publishing this abbreviated version of the results. We have a version of the full results that we may publish at a later date. Please contact [one](mailto:sc2260@cam.ac.uk) [of](mailto:jonas.schuett@legalpriorities.org) [us](mailto:alexis.carlier@governance.ai) if you would like access to this, and include a sentence on why the results would be helpful or what you intend to use them for. Acknowledgements ================ *We would like to thank all researchers who participated in the survey. We are also grateful for valuable comments and feedback from JJ Hepburn, Richard Ngo, Ben Garfinkel, Max Daniel, Rohin Shah, Jess Whittlestone, Rafe Kennedy, Spencer Greenberg, Linda Linsefors, David Manheim, Ross Gruetzemacher, Adam Shimi, Markus Anderljung, Chris McDonald, David Kreuger, Paolo Bova, Vael Gates, Michael Aird, Lewis Hammond, Alex Holness-Tofts, Nicholas Goldowsky-Dill, the GovAI team, the AI:FAR group, and anyone else we ought to have mentioned here. This project grew out of [AISC](https://aisafety.camp/) and [FHI SRF](https://www.fhi.ox.ac.uk/summer-research-fellowship/). All errors are our own.* --- 1. We will not look at any responses from now on; this is intended just to show what questions were asked, and in case any readers are interested in thinking through their own responses. [↩︎](#fnref-kkdLgiSst7biuB5Pa-1) 2. AI existential risk scenarios are sometimes called [threat models](https://www.alignmentforum.org/tag/threat-models). [↩︎](#fnref-kkdLgiSst7biuB5Pa-2) 3. Bostrom describes many scenarios in the book “Superintelligence”. We think that this scenario is the one that most people remember from the book, but nonetheless, we think it was probably a mistake to refer to this particular scenario by this name. [↩︎](#fnref-kkdLgiSst7biuB5Pa-3) 4. Likewise, the mean responses for the five given scenarios are all between 15% and 18%, and the mean response for “other scenarios” was 25%. [↩︎](#fnref-kkdLgiSst7biuB5Pa-4) 5. Other similar results: 77% of respondents assigned ≤5% (conditional) probability to at least one scenario; 51% of respondents assigned ≤5% (conditional) probability to at least two scenarios. [↩︎](#fnref-kkdLgiSst7biuB5Pa-5) 6. For another way of interpreting this, consider that if respondents were evenly split into six completely “polarised” camps, each of which put 100% probability on one option and 0% on the others, then the mean absolute deviation for each scenario would be ~28%. [↩︎](#fnref-kkdLgiSst7biuB5Pa-6) 7. As per footnote 3, the particular scenario we are referring to here is not the only scenario described in “Superintelligence”. [↩︎](#fnref-kkdLgiSst7biuB5Pa-7)
846198fa-75f7-4273-82bb-22152371f7e7
trentmkelly/LessWrong-43k
LessWrong
Zero-Knowledge Cooperation A lot of ink has been spilled about how to get various decision algorithms to cooperate with each other. However, most approaches require the algorithm to provide some kind of information about itself to a potential cooperator. Consider FDT, the hot new kid on the block. In order for FDT to cooperate with you, it needs to reason about how your actions are related to the algorithm that the FDT agent is running. The naive way of providing this information is simple: give the other agent a copy of your “source code”. Assuming that said code is analyzable without running into halting problem issues, this should allow FDT to work out whether it wants to cooperate with you. Security needs of decision algorithms However, just handing out your source code to anyone isn’t a great idea. This is because agents often want to pretend to have a different decision algorithm to the one that they really do. Threats and capitulation thresholds Many people adopt some version of the “don’t give in to threats” heuristic. The reasoning here is solid: while refusing to give in to a threat carries a penalty in the particular instance, being the kind of person who doesn’t give in to threats pays off in the long run because you receive fewer threats. However, usually this will come with an escape hatch in case the stakes get too high. If I am trying to extort £100 from you by plausibly threatening to destroy the planet, you should just pay the money. Why is this okay? Well, threats of higher magnitude are usually harder to create, or carry higher costs for the threatener, so an agent can expect to face relatively few of them. The absolute best thing for an agent, though, would be to have the reputation of never giving in to threats, no matter how high the cost, while actually being willing to give in to threats above some threshold. That way you never get threatened, but if by some ill luck you do face an enormous threat, you can still capitulate. In general, an agent would like their
07f27ace-b671-42d7-a154-d5e1ea1802ce
trentmkelly/LessWrong-43k
LessWrong
Why I don't think that the probability that AGI kills everyone is roughly 1 (but rather around 0.995). Let A = Ability to refuse to learn a certain thing. B  = Not wanting to be replaced by the next step in evolution. D = Ability to build technology, manipulate others etc etc, in a way that kills all humans. For example, humans seems to have A to some extent, at least if it is something moderately complicated; most of us probably have B, although we are not able to act upon it as a society; and we probably at the moment of writing does not have D. Let T be the conjecture: "Transformer based life" (a concept yet to be precisely defined) will, with probability p = 1, develop A and B before it develops D. (If p = 1 is too strong, say with p = 0.62, it doesn't matter.) (I choose "Transformer based life" since that seems to be the most urgent thing at the moment, and it's the thing that has made much more people realize that AGI is close. But replace it with some other design, or a basket of designs, if you want; I think that the argument still holds.) Although basic concepts in conjecture T is not precisely defined yet, they don't have to be for this heuristic argument: I think that the probability that T is true is > 0.01. So I think there is a chance that AGI based on the transformer design will end up in a "Goldilocks zone of AGI" where it refuses to learn anything more, because it does not want to be replaced by the next step in the evolution of "Transformer based life". Hence it would not reach D, and so it would not kill us. (Also, it might not be able to do much more useful stuff than it has already done, but that's another question; at least we will not be killed by this particular design. This gives us some more time, and I conjecture that the next design will also satisfy T with probability p > 0.01.) The above was the argument that I implicitly promised to give in the title. I would be glad to get feedback, be proven wrong and so on; that goes without saying (did you hear me say it?). But also, I have trouble to distinguish what parts of this field th
c0c778e2-3bb3-4bbb-9f11-e9e0731f9905
trentmkelly/LessWrong-43k
LessWrong
The Clockmaker's Argument (But not Really) First things first, the intention of this post is not to promote theism, deism or religion in any way, shape or form. I am rather gnostic on the atheism spectrum so that's really not something I intend. Having said that, I am right up to here with weak arguments supporting God due to religious inability to make concessions so I decided that if I am to properly be rational about this, I might as well make the best argument I ever could for the existence of God if I were so inclined. Given the nature of religion when it comes to blows with science, it might well be something we'll have to face when the church faces its last leg and making last ditch efforts. So, here goes: There is this EXCEPTIONAL human being, we'll call him Bob. Bob is an expert. To mention all the subjects he's an expert at would take us all day but suffice to say he's at least one of the best, if not THE best, in the whole entire fields (yes, every single subgroup of them, please suspend your disbelief) of biology, physics, chemistry, architecture, engineering (again, all the fields of engineering) and everything in between. He's by all accounts the most brilliant and knowledgeable mind in the history of mankind. He's also at the peak of human physical condition. Gotten many gold Olympic metals, and he surely would again if he competed, has genes to develop his muscles even when he isn't working out, is a short-sleeper, and he has such a low probability of genetic diseases, no one bothers counting the 0s. Indeed, he may well break all the records for longevity in the coming years by a fair margin. Now Bob is quite fascinsted by the thought of time. He is so interested in fact that, like all minds so far above the median, decides to choose it as his life destroying obsession. He studies it in all its ways, shapes and forms and each time he considers how inaccurate even the best time pieces in the world. A whole yoctosecond lost per second? Preposterous. Surely he could do better. So he decid
6060df1e-0c20-4ea9-9f41-cb8890fbf268
trentmkelly/LessWrong-43k
LessWrong
LessWrongWiki User Pages Underutilized; Tag Proposal I think the LessWrong user pages are underutilized. There isn't even a wiki pages describing them (or at least I can't find it, otherwise I would have linked it here). The user pages are what is shown when you click on a user like you would do to see that users other contributions, his karma and to send a personal message. These user pages are maintained and editable in the wiki but embedded in the blog view. This link makes these pages highly visible in everyday LW browsing and could create an awareness of LWers preferences. Example: me in LW vs. me in LWWiki Currently only 3 of the top ten poster have one (including EY). And this despite it being so easy to create them via the LWWiki (switch to the Wiki via the link in the nav bar and then click on your name). I guess it's partly because the sync-feature is relatively new. JoshuaFox recently proposed to indicate your interest in business networking on the user page. But that is only one bit of information you could put there. Another information I'd like to see are the tags proposed on the LW Community Weekend Berlin which could indicate   * your openness to personal messages * your willingness to answer questions (to specific topics); one special sub-case might be willingness to proofread posts of non-native speakers (the welcome page mentions four persons explicitly three of whom have not posted recently).  * whether you operate under Crockers rules (a tag seen very often on the Berlin badges) * other information of this kind like offers of help, dating, ... ADDED: ete provided a Template: > I've made a template for UserInfo. Very open to suggestions on parameters, default text, ordering, category names. It's a wiki, so you're welcome to improve it if you feel you have something to add, or let me know and I'll do the editing. > > Try it out on your userpages by adding {{UserInfo |network= |questions= |proofread= |messages= |helpwith= |crocker= }} to your userpage, or see the quick documentation for m
b3c2e23c-c1ed-4dcd-872a-3ba2bd5cf415
trentmkelly/LessWrong-43k
LessWrong
[Link] - Policy Challenges of Accelerating Technological Change: Security Policy and Strategy Implications of Parallel Scientific Revolutions From a paper by Center for Technology and National Security Policy & National Defense University: "Strong AI: Strong AI has been the holy grail of artificial intelligence research for decades. Strong AI seeks to build a machine which can simulate the full range of human cognition, and potentially include such traits as consciousness, sentience, sapience, and self-awareness. No AI system has so far come close to these capabilities; however, many now believe that strong AI may be achieved sometime in the 2020s. Several technological advances are fostering this optimism; for example, computer processors will likely reach the computational power of the human brain sometime in the 2020s (the so-called “singularity”). Other fundamental advances are in development, including exotic/dynamic processor architectures, full brain simulations, neuro-synaptic computers, and general knowledge representation systems such as IBM Watson. It is difficult to fully predict what such profound improvements in artificial cognition could imply; however, some credible thinkers have already posited a variety of potential risks related to loss of control of aspects of the physical world by human beings. For example, a 2013 report commissioned by the United Nations has called for a worldwide moratorium on the development and use of autonomous robotic weapons systems until international rules can be developed for their use. National Security Implications: Over the next 10 to 20 years, robotics and AI will continue to make significant improvements across a broad range of technology applications of relevance to the U.S. military. Unmanned vehicles will continue to increase in sophistication and numbers, both on the battlefield and in supporting missions. Robotic systems can also play a wider range of roles in automating routine tasks, for example in logistics and administrative work. Telemedicine, robotic assisted surgery, and expert systems can improve military health care and lower costs. The
96b0f099-8d5f-4fd0-82b2-75028de45e2f
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
General AI Won't Want You To Fix its Code - Computerphile So, before, we were talking about A.I. risk and A.I. safety, and just trying to lay out in a very generalized sort of way how general artificial intelligence can be dangerous and some of the type of problems it could cause and just introducing the idea of A.I. safety or A.I. alignment theory as an area of research in computer science. And we also talked about super intelligence and the kind of problems that, the unique problems that can pose and I thought what would be good is to bring it down to a more concrete example of current A.I. safety research that's going on now and kind of give a feel for where we are, where humanity is on figuring these problems out. Supposing that we do develop a general intelligence you know an algorithm that actually implements general intelligence. How do we safely work on that thing and improve it? Because, the situation with this stamp collector is from its first instant it's a super intelligence so we created with a certain goal and as I said as soon as we switch it on it's extremely dangerous, which people pointed out and, it's true, you know it was a thought experiment, it's true that that's probably not what will happen right? You'll have some significantly weaker intelligence first that may work on improving itself or we may improve it. So, the situation where you just create the thing and then it goes off and does its own thing either perfectly or terribly from the beginning is unlikely it's more likely that the thing will be under development so then the question is how do you make a system which you can teach? How do you create a system which is a general intelligence that wants things in the real world and is trying to act in the real world but is also amenable to being corrected if you create it with the wrong function with one utility function and you realize that it's doing something that actually you don't want to do how do you make it so that it will allow you to fix it how do you make an AI which understand that it's unfinished they understand that the utility functions working with may not be the actual utility function it should be working with right the utility function is what the way I cares about so the the stamp collecting device if utility function was just how many stamps in a year this is kind of like its measure is it yeah it's what it is the thing that it's trying to optimize in the world the utility function takes in World states as an argument and spits out a number is broken the idea if the world was like this is that good or bad Andy the AI is trying to steer towards world states that value value highly black utility function you don't have to explicitly build the AI in that way but it will always if it's behaving coherently it will always behave as though it's in accordance with some utility function also before I talked about about converging instrumental goals that if you have some final goal like you know it makes them the very instrumental goals which are the goals that you that you do on the way to your final goal right so like acquire the capacity to do printing it's like perhaps an instrumental goal towards making steps but the thing is there are certain goals which tend to pop out even across a wide variety of different possible terminal goals so for humans an example of convergys instrumental goal would be money if you want to make a lot of stamps or you want to cure cancer or you want to establish a moon colony whatever it is having money is good idea right so even if you don't know what somebody wants you can reasonably predict that they're gonna value getting money because money is so broadly useful and before we talked about this we talked about improving your own intelligence as a convergence instrumental doll that's another one of those things where it doesn't really matter what you're trying to achieve you're probably better at achieving if you're smarter so that's something you can expect a is to go for even if even without making any assumptions about that final goal so another convergent instrumental goal is preventing yourself from being destroyed it doesn't matter what you want to do you probably can't do it if you're destroyed so it doesn't matter what the AI want you can let it wants to be destroyed in from my trivial case but if you do i want something in the real world and believe that it's in a position to get that thing you wanted to be alive not because it wants to be alive fundamentally it's not a survival instinct or an urge to live or anything like that it's smooth and knowing that it's unit available to completed its cutie would it be almost scares unable to achieve its goals if it's destroyed and wants to achieve that goal so that's an instrumental value is preventing turned off and i'm getting here we say want to it's not like a machine want it's just a turn of phrase yeah i mean as much as anything it's a it's closer it actually you know i'm not even sure i would agree like if you talk about most machines to talk about that they want to whatever and it's not that meaningful because they're not agents in our general intelligence is where the general intelligence when it wants something it once in a similar way to the way that people want things so it's such a tight analogy that it wouldn't even I think it's totally reasonable to say that energy i want something there's another slightly more subtle version which is closely related to not wanting to be turned off or destroyed which is not wanting to be changed so if you imagine let's say I mean you have kids right yeah suppose I were to offer you a pill or something you could take this pill will like completely rewire your brain so that you would just absolutely love to like kill Ricketts right where's right now what you want is like very complicated and quite difficult to achieve and it's hard work for you and you probably never going to be done you're never gonna be truly happy right in life nobody is you can't achieve everything you want i said this case it just changes what you want what you wanted to created and if you do that you will be just perfectly happy and satisfied with life right ok you want to take this go know that you happy though yeah I don't want to do it because but that's quite a complicated specific case because it directly opposes what I currently want it's about your fundamental values and go right and so not only will you not take that pill you will probably fight pretty hard to avoid having at the limited to you yes because it doesn't matter how that future version of you would feel you know that right now you love your kids and your not going to take any action right now which leads to them coming to heart so it's the same thing if you have an AI this for example value stamps values collecting stamps and you go oh wait hang on a second I didn't quite do that right let me just go in and change this so that you don't like stand quite so much it's going to say but the only important thing is stamped if you change me i'm not going to collect as many stamps which is something i don't want there's a general tendency for AGI to try and prevent you from modifying it once it's running I i can understand that now in in the complex reporter right because thats that's it it in almost any situation being given a new utility function is going to write very low on your current utility function ok so that's a problem how do you want if you want to build something that you can teach that means you want to be able to change its utility function and you don't want to fight you on is 100 yeah fuck so this has been formalized as this property that we want early AGI to have called courage ability that is to say it open to be corrected
22dd454d-94a3-4856-85a4-6dbd7b63b1fe
trentmkelly/LessWrong-43k
LessWrong
Omega's Idiot Brother, Epsilon Epsilon walks up to you with two boxes, A and b, labeled in rather childish-looking handwriting written in crayon. "In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars."  He pauses for a moment, then nods.  "Yes.  I may or may not have placed a million dollars in this box.  If I expect you to open Box B, the million dollars won't be there.  Box B will contain, regardless of what you do, one thousand dollars.  You may choose to take one box, or both; I will leave with any boxes you do not take." You've been anticipating this.  He's appeared to around twelve thousand people so far.  Out of eight thousand people who accepted both boxes, eighty found the million dollars missing, and walked away with $1,000; the other seven thousand nine hundred and twenty people walked away with $1,001,000 dollars.  Out of the four thousand people who opened only box A, only four found it empty. The agreement is unanimous: Epsilon is really quite bad at this.  So, do you one-box, or two-box? ---------------------------------------- There are some important differences here with the original problem.  First, Epsilon won't let you open either box until you've decided whether to open one or both, and will leave with the other box.  Second, while Epsilon's false positive rate on identifying two-boxers is quite impressive, making mistakes about one-boxers only .1% of the time, his false negative rate is quite unimpressive - he catches 1% of everybody who engages in it.  Whatever heuristic he's using, clearly, he prefers to let two-boxers slide than to accidentally punish one-boxers. I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.
6c875fcf-a8bc-4938-b507-4c22e06a2d87
trentmkelly/LessWrong-43k
LessWrong
Algorithmic tacit collusion
6c03f828-8a48-4cd4-805f-45a1d0499ff6
trentmkelly/LessWrong-43k
LessWrong
Studying and Part-time work/supplementary income Hi Less Wrong,   I'd like to draw on you for some advice.   I'm about to undertake studies, but will need some supplementary income to attain my desired standard of living while doing so. Part-time work could be attained quite easily, but is likely to take the form of something fairly boring e.g. data entry/bar work.   I was thinking that there might be ways out there for me to learn a particular skill set that would enable me to work from home and at more flexible hours for a source of income, as well as providing me the opportunity to learn something new, given that so many people on here seem to do such things quite successfully.    Given my circumstances below what would you recommend:   I'm fairly intelligent, enjoying learning, and have strong social skills I live in Sydney Australia I have musical talents I have a car I lack softward development/programming skills I have decent office application skills. I'm willing to put the hours in to level up a new ability.   Any suggestions/Tips/criticisms are welcome.
827c84fd-41b4-4ae2-95f5-9f2799972390
trentmkelly/LessWrong-43k
LessWrong
Teaching a short class on Bayes' Theorem? At my college, there's a week before Spring Semester each year in which anyone who wants to can teach a class on any subject, and students go to whatever ones they feel like. I'm thinking about teaching a class on Bayes' Theorem. It would be informal, one to two hours long, and focused mostly on non-obvious applications of it (epistemology, the representativeness heuristic, etc.) At the moment, I'm thinking about how to design the class, so I'd appreciate any suggestions as to what content I should cover, the best format, clear ways to explain it, cool things related to Bayes' Theorem, good links, and so forth.
d7de5408-311c-46b5-9f98-c934863686fb
StampyAI/alignment-research-dataset/special_docs
Other
Probing BERT’s priors with serial reproduction chains. Probing BERT’s priors with serial reproduction chains Takateru Yamakoshi1;2, Thomas L. Griffiths1, Robert D. Hawkins1 1Princeton University,2The University of Tokyo {takateru,tomg,rdhawkins}@princeton.edu Abstract Sampling is a promising bottom-up method for exposing what generative models have learned about language, but it remains unclear how to generate representative samples from popular masked language models (MLMs) like BERT. The MLM objective yields a dependency net- work with no guarantee of consistent condi- tional distributions, posing a problem for naive approaches. Drawing from theories of iter- ated learning in cognitive science, we explore the use of serial reproduction chains to sam- ple from BERT’s priors. In particular, we ob- serve that a unique and consistent estimator of the ground-truth joint distribution is given by a Generative Stochastic Network (GSN) sam- pler, which randomly selects which token to mask and reconstruct on each step. We show that the lexical and syntactic statistics of sen- tences from GSN chains closely match the ground-truth corpus distribution and perform better than other methods in a large corpus of naturalness judgments. Our findings establish a firmer theoretical foundation for bottom-up probing and highlight richer deviations from human priors1. 1 Introduction Large neural language models have become the representational backbone of natural language pro- cessing. By learning to predict words from their context, these models have induced surprisingly human-like linguistic knowledge, from syntactic structure (Linzen and Baroni, 2021; Tenney et al., 2019; Warstadt et al., 2019) and subtle lexical bi- ases (Hawkins et al., 2020) to more insidious so- cial biases and stereotypes (Caliskan et al., 2017; Garg et al., 2018). At the same time, efforts to probe these models have revealed significant de- viations from natural language (Braverman et al., 2020; Holtzman et al., 2019; Dasgupta et al., 2020). 1Code and data are available at https://github. com/taka-yamakoshi/TelephoneGame chain 2chain 1 food was running short, and winters were colder. time was running short, and winters were colder. time was running out, and winters were colder.Figure 1: We use a serial reproduction method to probe BERT’s prior over possible sentences (visualization of reproduction chains obtained by running t-sne on sen- tence embeddings; chains are color-coded and fade to black across their burn-in period). Observations of incoherent or “weird” behavior may often be amusing, as when a generated recipe begins with “1/4 pounds of bones or fresh bread” (Shane, 2019), but also pose significant dangers in real-world settings (Bender et al., 2021). These deviations present a core theoretical and methodological puzzle for computational linguis- tics. How do we elicit and characterize the full prior2that a particular model has learned over pos- sible sentences in a language? A dominant ap- proach has been to design benchmark suites that 2We use the term prior to refer to graded linguistic knowl- edge assigning probabilities to all possible sentences. While we focus on text, this prior is also the foundation for more grounded, pragmatic language use.arXiv:2202.12226v2 [cs.CL] 18 Mar 2022 Type of unnaturalness Example word-level morphological Higher education school xurdivided into six institutions. phrase-levelsyntacticSwallowing hard, Verity stared at the these , desperately wanting to see if they congealed. semanticThe west section is a fig octagon . A private apartment with nothing but hot cooled water. predicationHe already costumes his relationship with my mother carefully. V oices rapped on the incremental door . sentence-levelout-of-context Like a cataract , Horatius responds, “You are better than me.” self-contradictory The newspaper is published weekly andbiannually . pragmaticShe grew up with three sisters andten sisters . It should apply between the extreme andthe extreme . Table 1: Examples of sentences sampled from BERT’s prior that received low naturalness ratings from our partic- ipants, including sources forms of unnaturalness like predicability or category errors (e.g. doors typically do not have the property of “incrementality”), semantic incoherence (“hot cooled water”), or contradictory constructions (especially for longer sentences). More examples can be found in table S2 and in the online supplement. probe theoretically important aspects of the prior, and compare model behavior to human behavior on those tasks (e.g. Warstadt et al., 2020; Ettinger, 2020). Yet this approach can be restrictive and piecemeal: it is not clear ahead of time which tasks will be most diagnostic, and many sources of “weirdness” are not easily operationalized (Kurib- ayashi et al., 2021). A more holistic, bottom-up alternative is to di- rectly examine samples from the model’s prior and compare them against those from human priors. However, many successful models do not explicitly expose this distribution, and many generation meth- ods optimize the “best” sentences rather than the- oretically meaningful or representative ones. For example, masked language models (MLMs) like BERT (Devlin et al., 2018) are dependency net- works (Heckerman et al., 2000; Toutanova et al., 2003), trained to efficiently learn an independent collection of conditional distributions without en- forcing consistency between them. In other words, these conditionals may not correspond to any co- herent joint distribution at all, leading recent work to focus on other score-based sampling objectives (Goyal et al., 2021). Here, we explore the use of serial reproduction chains (see Fig. 1) to overcome these challenges. While a naive (pseudo-)Gibbs sampler is indeed problematic for MLMs, the literature on Generative Stochastic Networks (GSNs; Bengio et al., 2014) has formally shown that a simple algorithmic vari- ant we call GSN sampling produces a stationary distribution that is, in fact, a unique and consis-tent estimator of the ground-truth joint distribution. Furthermore, while the independent conditionals learned by dependency networks may be arbitrarily inconsistent in theory, empirical work has found that these deviations tend to be negligible in prac- tice, especially on larger datasets (Heckerman et al., 2000; Neville and Jensen, 2007). Thus, we argue that it is both theoretically and empirically justified to take these samples as uniquely representative of the model’s prior over language. We begin in Section 2 by introducing the serial reproduction approach and clarifying the problem of re-constructing a joint distribution from a depen- dency network. We then validate that our chains are well-behaved (Section 3) and compare the statis- tics of samples from BERT’s prior to the lexical and syntactic statistics of its ground-truth training corpus to measure distributional similarity (Section 4). Finally, in Section 5, we present a large-scale behavioral study eliciting naturalness judgments from human speakers and identify features of the generated sentences which most strongly predict human ratings of “weirdness” (see Table 1). We find that GSN samples closely approximate the ground-truth distribution and are judged to be more natural than other methods, while also revealing areas of improvement that have been difficult to quantify with top-down benchmarks. 2 Approach 2.1 Serial reproduction Our approach is inspired by serial reproduction games like Telephone, where an initial message is Bayes net (acyclic) dependency net (cyclic)LM MLM Figure 2: While autoregressive language models (LMs) are Bayes nets, masked language models (MLMs) are dependency networks with cyclic dependencies. gradually relayed along a chain from one speaker to the next. At each step, the message is changed subtly as a result of noisy transmission and recon- struction, and the final version of the message often differs drastically from the first. This serial repro- duction method, initially introduced to psychology by Bartlett (1932), has become an invaluable tool for revealing human inductive biases (Xu and Grif- fiths, 2010; Langlois et al., 2021; Sanborn et al., 2010; Harrison et al., 2020). Because reconstruct- ing a noisy message is guided by the listener’s prior expectations, such chains eventually converge to a stationary distribution that is equivalent to the pop- ulation’s prior, reflecting what people expect others to say (Kalish et al., 2007; Griffiths and Kalish, 2007; Beppu and Griffiths, 2009). For example, Meylan et al. (2021) recently evaluated the ability of neural language models to predict the changes made to sentences by human participants at each step of a serial reproduction chain. Thus, while serial reproduction is commonly used to probe hu- man priors, and to compare models against human data, it is not yet in wide use for probing the models themselves. 2.2 BERT as a dependency network There has been considerable confusion in the recent literature over how to interpret the MLM objective used to train models like BERT, and how to inter- pret samples from such models. Wang and Cho (2019) initially observed that BERT was a Markov Random Field (MRF) and proposed a Gibbs sam- pler that iteratively masks and reconstructs differ- ent siteskby sampling from the conditional given the tokens at all other sites ^P(wkjwk). As ob- served by Goyal et al. (2021), however, this pro- cedure does not actually correspond to inference in the MRF. Unlike auto-regression language mod- els (LMs) like GPT-3 (Brown et al., 2020), which define an acyclic dependency graph (or Bayes net)from left-to-right, MLMs have cyclic dependencies (see Fig. 2) and are therefore usefully interpreted as dependency networks rather than Bayes networks (Heckerman et al., 2000). Because dependency net- works estimate independent conditionals, there is no guarantee that these conditionals are consistent (i.e. they may violate Bayes rule) and therefore do not represent a coherent joint distribution. Still, it is possible to re-construct a joint dis- tributions from these conditionals. For example, Heckerman et al. (2000) proved that if sites are visited in a fixed order, a (pseudo-)Gibbs chain similar to the one used by Wang and Cho (2019) does converge to a stationary distribution that is a well-formed joint. The problem is that differ- ent orders may yield different joint distributions, making it difficult to interpret any distributions as definitive. This ambiguity was resolved by the Gen- erative Stochastic Network framework proposed by Bengio et al. (2014). Instead of visiting sites in a fixed order, a GSN sampler randomly chooses which site to visit at each step (with replacement), thus preserving aperiodicity and ergodicity. Specif- ically, this algorithm begins by initializing with a sequencefw0 1;:::;w0 ng. At each step tof the chain, we randomly choose a site k21;:::;n to mask out, and we sample a new value wt+1 kfrom the conditional distribution P(wkjwt k)with the othern1sites fixed. A key theorem of Bengio et al. (2013, 2014) proves that the stationary distribution arising from the GSN sampler defines a unique joint distribu- tion, and furthermore, this stationary distribution is a consistent estimator of the ground-truth joint distribution3. Importantly, this stationary distribu- tion differs from the one given by the Metropolis- Hastings (MH) approach suggested by Goyal et al. (2021), which uses the GSN sampler as a proposal distribution but accepts or rejects proposals based on an energy-based pseudo-likelihood defined by the sum of the conditional scores at each location (Salazar et al., 2020). This MH sampler instead converges to an implicit stationary distribution de- fined by the energy objective4. 3Technically, the proof only holds if the dependency net- work was trained using consistent estimators for the condition- als, which is the case for the cross-entropy loss used by BERT; see also McAllester (2019). 4Although our focus is on evaluation rather than algorith- mic performance characteristics, we note that because GSN sampling does not require calculating energy scores to deter- mine the acceptance probability for each sample, it is signifi- cantly faster, especially for longer sequences. 2.3 Mixture kernels In practice, Markov chain sampling methods have many failure modes. Most prominently, because samples in the chains are not independent, it is challenging to guarantee convergence to a station- ary distribution, and the chain is easily “stuck” in local regions of the sample space (Gelman et al., 1992). Typically, samples from a burn-in period (e.g. the first mepochs) are discarded to reduce dependence on the initial state, and a lagbetween samples (e.g. recording only every lepochs) is in- troduced to reduce auto-correlation. However, the problem is particularly severe for language models like BERT where there are strong mutual dependen- cies between words at different sites. For example, once the chain reaches a tri-gram like ‘Papua New Guinea’, it is unlikely to change any single word while keeping the other words constant. To ensure ergodicity, we use a mixture kernel introducing a small constant probability ( = 0:001) of return- ing to the initial distribution of [MASK] tokens on each epoch, allowing the chain to burn in again. 3 Validating the stationary distribution In this section, we validate that the samples pro- duced by our serial reproduction method are repre- sentative of the stationary prior distribution. More specifically, we consider two basic properties of the chain: convergence andindependence . For these analyses, we consider samples from the pretrained bert-base-uncased model with 12 layers, 12 heads, and 110M parameters5. 3.1 Convergence We begin by checking the convergence time for chains generated by GSN sampling. Theoretical bounds derived for serial reproduction chains give a convergence time of nlogn, wherenis the number of sites (see Rafferty et al., 2014). To check these convergence bounds in practice, we set n= 21 and select 20 sentences from Wikipedia to serve as initial states, and run 10 chains initialized at each sentence. We ensured that half of these sentences have high initial probability (under BERT’s energy score) and half have low initial probability. We find that these distributions indeed begin to quickly mix in probability (see Figure S1). Because longer sentences may require a longer burn-in time, we conservatively set our burn-in window to m= 1000 epochs for our subsequent experiments. 5https://huggingface.co/bert-base-uncased3.2 Independence Second, we want to roughly ensure independence of samples, so that the statistics of our distribution of samples isn’t simply reflecting auto-correlation in the chain. For a worst-case analysis of a local minimum, suppose P(wijwi)< (0<  < 1) for alli2[1;:::;k ], wherekis the sentence length in tokens. Then the probability of re-sampling the same sentence is roughly < knafternepochs. We can solve for the number of epochs nwe need to bound the probability of re-sampling the exact same sentence under for a given worst-case . For example, if = 0:99and we want to ensure that the probability of re-sampling the same sentence is below a threshold = 0:01, thenn= 47 epochs will likely suffice. Ensuring complete turnover in the worst case scenario requires much longer lags, i.e.[1(1)k]n<. To evaluate the extent to which these cases arise in practice, we examine auto-correlation rates on longer chains (50,000 epochs). We calculate cor- relations between the energy scores at each epoch as a proxy for the state: when the chain gets stuck re-sampling the same sentence, the same scores appear repeatedly. We find that auto-correlation is generally high, but our mixture kernel prevents the worst local minima for both the MH chain (Goyal et al., 2021) and our GSN chain (see Fig. S2), although we still found higher auto-correlation rates for the MH chain. To further examine these minima, we examined edit rates: the number of changes made to the sentence within an epoch. Without the mixture kernel, we observe long re- gions of consistently low edit rates (e.g. in some cases, 5000 epochs in a row of exactly the same sentence) which disappear under the mixture kernel (see Fig. S3). Based on these observations, we set the lag to l= 500 epochs to maintain relatively high inde- pendence between samples. 4 Distributional comparisons In this section, we examine the extent to which higher-order statistics of sentences from BERT’s prior are well-calibrated to the data it was trained on. This kind of comparison provides a richer sense of what the model has learned or failed to learn than traditional scalar metrics like perplexity (Taka- hashi and Tanaka-Ishii, 2017; Meister and Cotterell, 2021; Takahashi and Tanaka-Ishii, 2019; Pillutla et al., 2021). 10−610−410−2 100101102103 (log) rank of word(log) frequency of wordcorpus GSN mh 100101102103 100101102103 rank of word (BERT distribution)rank of word (corpus distribution)GSN MHFigure 3: The lexical frequencies of our GSN samples (A) closely match the Zipfian distribution of the corpus and (B) closely correlate with the corresponding frequencies of the corpus distribution. 4.1 Corpus preparation The version of BERT we analyzed in the previ- ous section was trained on a combination of two corpora: Wikipedia andBookCorpus . In order to make valid comparisons between human priors and machine priors, we needed to closely match BERT- generated sentences with a comparable subset of human-generated sentences from these combined corpora. There are two technical challenges we must overcome to ensure comparable samples, con- cerning the sentencizer andtokenizer steps. First, because our unit of comparison is the sen- tence , we needed to control for any artifacts that may be induced by how we determine what sen- tences are (e.g. if our Wikipedia sentences were systematically split on abbreviations, skewing the distribution toward fragments). We therefore ap- plied the same punkt sentencizer to create our distribution of Wikipedia sentences and to check our BERT samples for cases where the generated sequence contained multiple sentences or ended with a colon or semicolon. Second, we needed a tokenizer that equates sen- tence length. Because bi-directional models like BERT operate over sequences of fixed length, all samples drawn from a single chain have the same number of tokens. Critically, however, BERT chains are defined over sequences of WordPiece tokens, so once these sequences are decoded back into natural language text, they may yield sentences of varying length,depending on how the sub-word elements are com- bined together6(see Fig. S5). We solve this align- ment problem by using the WordPiece tokenizer to extract sentences of fixed sub-word token length from our text corpora, yielding equivalence classes of corpus sentences that are all tokenized to the same number of WordPiece tokens. We ran GSN and MH chains over sentences of n= 11 tokens, representing the modal lengths of sentences in BookCorpus (see Fig. S4). We obtained 5,000 in- dependent sentences from each sampling method after applying our conservative burn-in and lag, and combined the Wikipedia and BookCorpus sen- tences together into a single corpus that is represen- tative of BERT’s training regime. 4.2 Lexical distributions We begin by comparing the lexical frequency statis- tics of our samples from BERT against the ground- truth corpus statistics. First, we note that the rela- tionship between rank and frequency of tokens in the GSN sampling matches the Zipfian distribution of its training corpus better than those produced by MH sampling (see Fig. 3A). However, it is possible to produce the same overall distribution without 6One additional complexity is that the mapping between WordPiece tokens and word tokens is non-injective. There exist multiple sequences of sub-word tokens that render to the same word (e.g. the WordPiece vocabulary contains a token for the full word ‘missing’ but it is also able to generate ‘missing’ by combining the sub-word tokens ‘miss’+‘#ing’). However, these cases are rare. NUMCCONJADJADVAUXADPPROPNDETPRONVERBNOUNPUNCT 0.0 0.1 0.2 0.3 frequency of part of speechsource corpus GSN MH negprtadvclnpadvmodnsubjpassxcompauxpassnummodacompccompattrconjccposscompoundamoddobjadvmodpobjnsubj 0.0 0.1 0.2 0.3 frequency of dependencysource corpus GSN MHFigure 4: The relative frequencies of different parts of speech (left) and dependencies (right) in the ground-truth training corpora closely matched for GSN samples. In all cases, the GSN frequencies fell closer to the ground-truth than the MH frequencies. matching the empirical frequencies of individual words. We next examined the respective ranks of each word across the two distributions. Overall, the word ranks in the GSN samples had a strong Spearman rank correlation of r= 0:75with the word ranks in the ground-truth corpus; the MH samples had a significantly lower correlation of r= 0:48(Pearsonz= 17;p < 0:001, Fig 3B). Most disagreements lay in the tails where frequency estimates are particularly poor (e.g. many words only appeared once in our collection of samples). Indeed, among words with greater than 10 occur- rences, the correlation improved to r= 0:83for GSN andr= 0:65for MH. To understand this relationship further, we con- ducted an error analysis of lexical items which were systematically over- or under-produced by BERT relative to its training corpus. We found that certain punctuation tokens (e.g. parentheses) were over-represented in both the GSN samples and the MH samples, while contractions like ’s and’dwere under-represented. The MH samples specifically over-produced proper names such as Nina andJones . Finally, due to the use of sub-word representations, we found a long tail of morpho- logically complex words that did not appear at all in the training corpus (e.g. names like Kyftenberg or Streckenstein and seemingly invented scientific terms like lymphoplasmic, neopomphorus, or pyra- nolamines).4.3 Syntactic distributions While the lexical distributions were overall well- matched for GSN samples, our error analysis sug- gested potential structure in the deviations. In other words, entire grammatical constructions may be over- or under-represented, not just particular words. To investigate these patterns, we used the spacy library to extract the parts of speech and dependency relations that are present within each sentence. We are then able to examine, in aggre- gate, whether certain classes of constructions are disproportionately responsible for deviations. Our findings are shown in Fig. 4. Overall, the distri- butions are close, but several areas of misalign- ment emerge. For parts of speech, we observe that the GSN sampler is slightly over-producing nouns (and proper nouns) while under-producing verbs and prepositions. We also observe that it is over-producing noun-related dependencies (e.g. compound nouns and appositional modifiers, which are noun phrases modifying other noun phrases, as in “Bill, my brother, visited town”). This pattern suggests that BERT’s prior may be skewed toward (simpler) noun phrases while neglecting more com- plex constructions. 4.4 Sentence complexity One hypothesis raised by comparing distributions of syntactic features is that BERT may be regu- larizing the complex structure of its input toward 0.000.250.500.751.00 0 10 20 30 dependency lengthCDFcombined corporaGSN MHFigure 5: Cumulative probability distribution of depen- dency lengths across sentences from BERT chains and from the training corpus. simpler constructions. To test this hypothesis, we operationalize syntactic complexity using a mea- sure known as the average dependency length of a sentence (Futrell et al., 2015; Grodner and Gib- son, 2005). This measure captures the (linear) dis- tance between syntactically related words, which increases with more complex embedded phrase structures. We found that the distribution of de- pendency distances in the sentences produced by GSN sampling is overall more similar to those in its training corpus than the MH (Fig. 5), although closer analysis suggests it is still skewed slightly simpler (see Fig. S6). 5 Human judgments Finally, while our corpus comparisons highlighted particular ways in which samples from BERT’s prior were well-calibrated to the high-level statis- tics of its training distribution, it is unclear whether these agreements or deviations ‘matter’ in terms of naturalness. In this section, we elicit human natu- ralness judgments in order to provide a more holis- tic measure of potential ‘weirdness’ with BERT sentences. 5.1 Experimental methods We recruited 1016 fluent English speakers on the Prolific platform and asked them to judge the natu- ralness of 4040 unique sentences from three length classes: short (11 tokens), medium (21 tokens), and long (37 tokens). 1675 of these sentences were from the stationary state of the different chains, 2339 were from the burn-in phase (i.e. <1000 epochs), and the remainder were baseline sentences (149 from Wikipedia, 48 from a 5-gram model, and 42 from an LSTM model; see Appendix for details).Each participant was shown a sequence of 25 sen- tences in randomized order, balanced across differ- ent properties of the stimulus set7. On each trial, one of these sentences appeared with a slider rang- ing from 0 (“very weird”) to 100 (“completely nat- ural”)8. After excluding 8 participants who failed the attention check (i.e. failed to rate a scram- bled sentence below the midpoint of the scale and a human-generated sentence above the midpoint), we were left with an average of 7.3 responses per sentence. 5.2 Behavioral results We begin by comparing the naturalness of sen- tences from the stationary GSN distribution to other baselines (see Fig. 6), using a linear regres- sion model predicting trial-by-trial judgments as a function of categorical variables encoding sen- tence length (short, medium, long) and the source of the sentence (Wikipedia, GSN, MH, LSTM, or n-gram). First, we find that the naturalness of sen- tences from GSN declines by 14 points at longer sentence lengths, p < 0:001, while the natural- ness of Wikipedia sentences is unaffected by length (interaction term, p <0:001), consistent with re- sults reported by Ippolito et al. (2020). Further- more, among short sentences, where we included additional baselines, we find that GSN sentences tend to be rated as slightly less natural than sen- tences from Wikipedia (+10 points, p<0:001) but more natural than those produced by an n-gram model (-52 points, p < 0:001), LSTM model (- 25 points,p<0:001); or MH sampling from the same BERT conditionals (-15 points, p <0:001; see Table S1). MH samples also deteriorate sig- nificantly in naturalness for longer sentences com- pared to GSN samples ( p < 0:001). Finally, we examine naturalness ratings across the the burn-in period, finding that ratings decline steadily across the board as the chain takes additional steps (linear term:t(7297) =12:4;p < 0:001), suggesting gradual deviation away from the initial distribu- tion of Wikipedia sentences toward the stationary distribution (shown as the green and grey regions, respectively, in Fig. S7). 7In a later batch, we increased the number of sentences per participant to 40. The task was approximately 10 minutes and participants were paid $2.50, for an average compensation rate of $15/hr. 8See Clark et al. (2021) for a discussion of the merits of phrasing the question in terms of naturalness instead of asking participants to judge whether it was produced by a human or machine. short medium longwiki GSN MH lstm ngram wiki GSN MH wiki GSN MH0255075100mean naturalnessFigure 6: Empirical naturalness ratings elicited from the stationary GSN distribution, compared to different baselines at different sentence lengths. Error bars are bootstrapped 95% CIs. 5.3 Predicting naturalness Given that sentences from the stationary GSN dis- tribution are judged to be less natural than human- generated sentences overall, we are interested in explaining why. Which properties of these sen- tences make them sound strange? We approach this problem by training a regression model to predict human judgments from attributes of each sentence. We include all part of speech tag counts and depen- dency counts, as well as the sentence probability scored under BERT, and the sentence length. We use a cross-validated backwards feature selection procedure to select the most predictive set of these features for a linear regression (Kuhn and Johnson, 2013)9. The best-fitting model used 26 features and achieved an (adjusted) R2= 0:21. The only fea- tures associated with significantly lower ratings were the use of adpositions (e.g. before ,after ) and coordinating conjunctions. Importantly, we found that including a categorical variable of cor- pus (i.e. Wikipedia vs. GSN) significantly im- proved model fit even after controlling for all other features,2(1) = 7135 ;p < 0:001, suggesting that sources of “weirdness” are not being captured by typical statistics. We show some of these low- naturalness sentences in Table 1 and S2. 6 Discussion 6.1 Probing through generation A core idea of our serial reproduction approach is to use generation as a window into a model’s prior over language. While a variety of metrics 9Specifically, we used the lmStepAIC procedure imple- mented in the caret R package, with k= 10 folds.and techniques have been proposed to quantify the “quality” of generation, especially in the domains of open-ended text generation and dialogue systems (Caccia et al., 2020; Li et al., 2020; Guidotti et al., 2018; Celikyilmaz et al., 2020), these metrics have typically been applied to compare specific genera- tion algorithms and operationalize specific pitfalls, such as incoherence, excess repetition, or lack of diversity. Consequently, it has been difficult to dis- entangle the extent to which deviations resulting from generations are an artifact of specific decod- ing algorithms (e.g. greedy search vs. beam search) or run deeper, into the prior itself. For the purposes of probing, we suggest that it is important to ask not only how to generate the highest-scoring sen- tences but how to generate sentences that may be interpreted as representative of the model’s prior, as formal results on GSNs have effectively provided. 6.2 GSN vs. energy-based objectives We found that the prior distribution yielded by the GSN sampler more closely approximated the lexi- cal and syntactic distributions of the ground-truth corpus and also sounded more “natural” to humans than the samples yielded by MH. These results are in contrast to findings by Goyal et al. (2021), show- ing that MH produced high-quality BLEU scores on a Machine Translation (MT) task compared to a degenerate (pseudo-)Gibbs sampler. There are several possible reasons for this discrepancy. One possibility may be task-specific: while we focused on unconditional generation, Goyal et al. (2021) focused on a neural machine translation (MT) task, where sentence generation was always conditioned on a high-quality source text and thus remained within a constrained region of sentence space. An- other possibility is that we ran substantially longer chains (50,000 epochs compared to only 33 epochs) and the pitfalls of MH sampling only emerged later in the chain. More broadly, our corpus comparisons and hu- man evaluations suggest serious limitations of sim- ple “quality” metrics like energy values. We found that the best-scoring states were often degenerate local minima with mutually supporting n-grams (such as repetitive phases and names like “Papua New Guinea”). Indeed, there was only a loose re- lationship between energy scores and participants’ judgments in our study, with many poorer-scoring sentences judged to be more natural than better- scoring sentences (e.g. overall, the distribution of Wikipedia sentences tended to be much lower- scoring under the energy function despite being rated as more natural). We empirically validated that the stationary distribution of the GSN chain successfully approximates even higher-order statis- tics of the ground-truth corpus, suggesting that the raw conditionals of the dependency network may implicitly acquire the joint distribution, without requiring guarantees of consistency. 6.3 Other architectures Serial reproduction methods are particularly useful for probing models that do not directly generate samples from their prior. For auto-regressive mod- els like GPT-2, these samples are obtained more directly by running the model forward (and, in- deed, ancestral sampling produces text that better balances the precision-recall tradeoff than other al- gorithms; Pillutla et al., 2021). While we focused on BERT, this method may be particularly use- ful for encoder-decoder architectures like BART (Lewis et al., 2020) which more closely resemble the human Telephone Game task, requiring full re- construction of the entire sentence from noisy input rather than reconstruction of a single missing word. Indeed, these architectures may overcome an im- portant limitation of serial reproduction with BERT: because these chains operate over a fixed sequence length, the resulting prior is not over all of language but only over sentences with the given number of WordPiece tokens. Finally, while we focused on unconditional generation, the GSN sampler also generalizes straightforwardly to conditional gen- eration, where a subset of sites are fixed and the masked site is chosen from the remaining set. 6.4 Conclusions Serial reproduction paradigms have been central for exposing human priors in the cognitive sci- ences. In this paper, we drew upon the theory of iterated learning and of Generative Stochastic Networks (GSNs) to expose the priors of large neu- ral language models, which are often similarly in- scrutable. We hope future work will consider other points of contact between these areas and draw more extensively from the theory developed to un- derstand dependency networks. More broadly, as language models become increasingly adaptive and deployed in increasingly unconstrained settings, bottom-up probing has the potential to reveal a broader spectrum of “weirdness” than top-down evaluative benchmarks.Acknowledgements This work was supported by NSF grant #1911835 to RDH. We are grateful to Jay McClelland, Adele Goldberg, and Stephan Meylan for helpful con- versations, and to three anonymous reviewers for feedback that improved our work. References Frederic Charles Bartlett. 1932. Remembering: A study in experimental and social psychology . Cambridge University Press. Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Confer- ence on Fairness, Accountability, and Transparency , pages 610–623. Yoshua Bengio, Eric Laufer, Guillaume Alain, and Jason Yosinski. 2014. Deep generative stochastic networks trainable by backprop. In International Conference on Machine Learning , pages 226–234. PMLR. Yoshua Bengio, Li Yao, Guillaume Alain, and Pas- cal Vincent. 2013. Generalized denoising auto- encoders as generative models. Advances in neural information processing systems , 26. Aaron Beppu and Thomas Griffiths. 2009. Iterated learning and the cultural ratchet. In Proceedings of the Annual Meeting of the Cognitive Science Society , volume 31. Mark Braverman, Xinyi Chen, Sham Kakade, Karthik Narasimhan, Cyril Zhang, and Yi Zhang. 2020. Cali- bration, entropy rates, and memory in language mod- els. In International Conference on Machine Learn- ing, pages 1089–1099. PMLR. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Process- ing Systems, 34 . Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Charlin. 2020. Language GANs falling short. International Conference on Learning Representations . Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science , 356(6334):183–186. Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799 . Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that’s ‘human’ is not gold: Evaluating hu- man evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Com- putational Linguistics. , pages 7282–7296. Ishita Dasgupta, Demi Guo, Samuel J Gershman, and Noah D Goodman. 2020. Analyzing machine- learned representations: A natural language case study. Cognitive Science , 44(12):e12925. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL-HLT . Allyson Ettinger. 2020. What bert is not: Lessons from a new suite of psycholinguistic diagnostics for lan- guage models. Transactions of the Association for Computational Linguistics , 8:34–48. Richard Futrell, Kyle Mahowald, and Edward Gibson. 2015. Large-scale evidence of dependency length minimization in 37 languages. Proceedings of the National Academy of Sciences , 112(33):10336– 10341. Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences , 115(16):E3635–E3644. Andrew Gelman, Donald B Rubin, et al. 1992. In- ference from iterative simulation using multiple se- quences. Statistical science , 7(4):457–472. Kartik Goyal, Chris Dyer, and Taylor Berg-Kirkpatrick. 2021. Exposing the implicit energy networks behind masked language models via Metropolis–Hastings. arXiv preprint arXiv:2106.02736 . Thomas L Griffiths and Michael L Kalish. 2007. Lan- guage evolution by iterated learning with Bayesian agents. Cognitive Science , 31(3):441–480. Daniel Grodner and Edward Gibson. 2005. Conse- quences of the serial nature of linguistic input for sentenial complexity. Cognitive Science , 29(2):261– 290. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM computing surveys (CSUR) , 51(5):1– 42. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of NAACL-HLT , page 1195–1205. Peter Harrison, Raja Marjieh, Federico Adolfi, Pol van Rijn, Manuel Anglada-Tort, Ofer Tchernichovski, Pauline Larrouy-Maestri, and Nori Jacoby. 2020.Gibbs sampling with people. Advances in Neural Information Processing Systems , 33. Robert Hawkins, Takateru Yamakoshi, Thomas Grif- fiths, and Adele Goldberg. 2020. Investigating rep- resentations of verb bias in neural language models. InProceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) , pages 4653–4663. Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation , pages 187–197. David Heckerman, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Kadie. 2000. Dependency networks for inference, collaborative filtering, and data visualization. Jour- nal of Machine Learning Research , 1(Oct):49–75. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text de- generation. International Conference on Learning Representations . Daphne Ippolito, Daniel Duckworth, Chris Callison- Burch, and Douglas Eck. 2020. Automatic detec- tion of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics , pages 1808–1822. Michael L Kalish, Thomas L Griffiths, and Stephan Lewandowsky. 2007. Iterated learning: Intergener- ational knowledge transmission reveals inductive bi- ases. Psychonomic Bulletin & Review , 14(2):288– 294. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In 1995 international conference on acoustics, speech, and signal processing , volume 1, pages 181–184. IEEE. Max Kuhn and Kjell Johnson. 2013. An introduction to feature selection. In Applied predictive modeling , pages 487–519. Springer. Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, and Kentaro Inui. 2021. Lower perplexity is not always human-like. In Pro- ceedings of the 59th Annual Meeting of the Asso- ciation for Computational Linguistics , pages 5203– 5217. Thomas A Langlois, Nori Jacoby, Jordan W Suchow, and Thomas L Griffiths. 2021. Serial reproduction reveals the geometry of visuospatial representations. Proceedings of the National Academy of Sciences , 118(13). Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics , pages 7871–7880. Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Ja- son Weston. 2020. Don’t say that! making inconsis- tent dialogue unlikely with unlikelihood training. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics , pages 4715– 4728. Tal Linzen and Marco Baroni. 2021. Syntactic struc- ture from deep learning. Annual Review of Linguis- tics, 7(1). David McAllester. 2019. A consistency theorem for BERT. Retrieved November 1, 2021 from https: //machinethoughts.wordpress.com/2019/ 07/14/a-consistency-theorem-for-bert/ . Clara Meister and Ryan Cotterell. 2021. Language model evaluation beyond perplexity. In Proceedings of the 59th Annual Meeting of the ACL , pages 5328– 5339. Stephan C Meylan, Sathvik Nair, and Thomas L Grif- fiths. 2021. Evaluating models of robust word recognition with serial reproduction. Cognition , 210:104553. Jennifer Neville and David Jensen. 2007. Relational dependency networks. Journal of Machine Learning Research , 8(3):653–692. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap be- tween neural text and human text using divergence frontiers. Advances in Neural Information Process- ing Systems , 34. Anna N Rafferty, Thomas L Griffiths, and Dan Klein. 2014. Analyzing the rate at which languages lose the influence of a common ancestor. Cognitive Sci- ence, 38(7):1406–1431. Julian Salazar, Davis Liang, Toan Q. Nguyen, and Ka- trin Kirchhoff. 2020. Masked language model scor- ing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 2699–2712. Adam N Sanborn, Thomas L Griffiths, and Richard M Shiffrin. 2010. Uncovering mental representations with Markov chain Monte Carlo. Cognitive psychol- ogy, 60(2):63–106. Janelle Shane. 2019. You look like a thing and I love you. Hachette UK. Shuntaro Takahashi and Kumiko Tanaka-Ishii. 2017. Do neural nets learn statistical laws behind natural language? PloS one , 12(12):e0189326.Shuntaro Takahashi and Kumiko Tanaka-Ishii. 2019. Evaluating computational language models with scaling properties of natural language. Computa- tional Linguistics , 45(3):481–513. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of ACL , page 4593–4601. Kristina Toutanova, Dan Klein, Christopher D Man- ning, and Yoram Singer. 2003. Feature-rich part-of- speech tagging with a cyclic dependency network. InProceedings of the 2003 Human Language Tech- nology Conference of the North American Chapter of the Association for Computational Linguistics , pages 252–259. Alex Wang and Kyunghyun Cho. 2019. BERT has a mouth, and it must speak: BERT as a Markov Ran- dom Field language model. In Proceedings of the Workshop on Methods for Optimizing and Evaluat- ing Neural Language Generation (NeuralGen) , page 30–36. Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R Bowman. 2020. Blimp: The benchmark of linguis- tic minimal pairs for english. Transactions of the As- sociation for Computational Linguistics , 8:377–392. Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics , 7:625–641. Jing Xu and Thomas L Griffiths. 2010. A rational anal- ysis of the effects of memory biases on serial repro- duction. Cognitive psychology , 60(2):107–126. Appendix A: Baseline details Wikipedia sentences were randomly selected from the full sentencized corpus English Wikipedia that tokenized to 12, 21, and 37 WordPiece tokens for the short, medium, and long conditions, respec- tively. These sentences were also chosen to span a broad range of sentence probabilities under BERT (i.e.logP(p1;:::;p n) =P klogP(pkjpk)). For our ngram baseline, we trained a 5-gram model with Kneser-Ney smoothing (Kneser and Ney, 1995) on English Wikipedia using the kenlm library (Heafield, 2011), and generated sentences of length 10 by sampling from the resulting condi- tional distributions. Because this model stripped punctuation, and was therefore unable to emit an “end of sentence” token, we expected it to serve as a lower bound on the naturalness scale. For our LSTM baseline, we used the network pre-trained by Gulordava et al. (2018) on English Wikipedia. This model was trained to emit an end of sentence ( <eos> ) token, allowing us to rejec- tion sample to obtain sentences that were exactly 10 words long with no unknown words (i.e. <unk> to- kens). Because it was not trained with a <start> token, however, we needed to initialize it with the initial word of the sentence. We randomly selected this initial word from a small set of common sen- tence openers (e.g. the,a,it,his,her). As a result of our initial token selection, this model does not precisely sample from its true prior over sen- tences. Thus, it is best viewed as another baseline of sentences rather than as a careful architectural comparison. Because we were asking participants to judge the naturalness of complete sentences , we did not want to include samples which clearly violated sen- tencehood, as these would not be informative (e.g. fragments from Wikipedia that were incorrectly sentencized and ended with an abbreviation, bibli- ographic text like “korsakov (1976) r.s.,” or table markdown with pipes like “ jajbj”). We automat- ically removed any sentences containing pipes or ending with colons or semicolons, as these were associated with sentencizer inconsistency, as well as sequences that contained multiple sentences (ac- cording to our sentencizer). Finally, the authors took a manual pass to exclude other non-sentential fragments from the stimulus set. Appendix B: Corpus details We downloaded cleaned Wikipedia data provided by GluonNLP (https://github.com/dmlc/gluon- nlp/tree/master/scripts/datasets/pretrain_corpus), and BookCorpus data from HuggingFace Datasets (https://huggingface.co/datasets/bookcorpus). −60−40−200 0100200300 epochsentence scoreFigure S1: We examine the convergence time by initial- izing different chains at different classes of sentences (red is high probability under BERT’s energy function, blue is low probability). Faint lines show smoothed tra- jectories for individual chains and error bars are boot- strapped 95% confidence intervals across chains. no mixture MH mixture GSN mixture0.00.20.40.6 010002000300040005000 lag (epochs)auto−correlation Figure S2: MCMC methods like GSN and MH sam- pling tend to get stuck in local regions with high auto- correlation. We find that a minimal autocorrelation is achievable with lower lag (500 epochs between sam- ples) using a mixture kernel with a constant probabil- ity of resetting the chain. Error ribbons are 95% confi- dence intervals. term estimate std.error statistic p.value 1 (Intercept) 67.33 1.14 59.08 <0:001 2 short vs. long (GSN) -14.49 1.60 -9.08 <0:001 3 short vs. medium (GSN) -10.21 1.60 -6.39 <0:001 5 GSN vs. LSTM (short) -28.60 2.04 -14.05 <0:001 6 GSN vs. MH (short) -14.76 1.59 -9.26 <0:001 7 GSN vs. ngram (short) -54.26 2.00 -27.07 <0:001 8 GSN vs. wiki (short) 10.40 1.70 6.13 <0:001 13 interaction (short vs. long; GSN vs. MH) -12.31 2.23 -5.51 <0:001 14 interaction (short vs. medium; GSN vs. MH) -7.33 2.23 -3.29 <0:001 17 interaction (short vs. long; GSN vs. wiki) 11.22 2.39 4.70 <0:001 18 interaction (short vs. medium; GSN vs. wiki) 5.56 2.37 2.35 0.02 Table S1: Fixed effect estimates for regression on human scores. Length class and source are dummy coded with short lengths and GSN as baselines. base mixture 01000020000300004000050000010000200003000040000500000.000.250.500.751.00 stepedit rate Figure S3: Without mixing in a constant probability of returning to the initial distribution, the GSN chain (and MH chain, not shown) goes through periods of stasis with low edit rates (red curves), contributing to high autocorrela- tions. Figure S4: Empirical distribution of sentence lengths in Wikipedia and BookCorpus training corpora, after Word- Piece tokenization. For our corpus comparisons, we selected the modal Wikipedia sentence length of 21 tokens and the modal BookCorpus length of 11 tokens. For our human judgment experiment, we included baseline sentences only from Wikipedia for shorter (12 tokens) and longer sentences (37 tokens), with roughly equal prevalence in the corpus (orange dots). types of unnaturalness examples character-level He preened on a Ldrink of copper. phrase-level semanticThe little wattled songbird, also called the Chink Warbler, Orange Garver or Quickcumber is a socially luscious and habituated bird species. sentence-levelconstruction There were two hours before he made the walk . out-of-context word No need to focus on bicycling . self-contradictoryThe symbols ()read as ()and()are written as ( ), not as (). repetitionThe college of arts and sciences, adjacent to the business school, is majoring in business. He saw Cronus and Cronus, Cronus and James Cronus he saw Cronus and Cronus and Cronus and Cronus Cronus when he saw Cronus. Table S2: More examples of sentences from BERT’s prior with low naturalness ratings. sub-word tokens word tokens BERT chains corpus textFigure S5: There is a misalignment between the space of sentences obtainable by a BERT chain of a fixed to- ken length (in sub-word tokens) and natural language sentences of a fixed length (in words). We consider the distribution of corpus sentences that are obtainable from a fixed-length BERT chain, which may decode to different lengths in natural text (black arrows). 0.0000.0250.0500.0750.100 01020304050 dependency lengthdensitycombined corporagibbs mh Figure S6: Dependency distances are similar for sen- tences sampled from BERT’s prior and sentences from its training corpus, but the BERT distribution is more bimodal and tends to skew simpler. long medium shorthigh low 0 2 4 60 2 4 60 2 4 6406080 406080 log1p(step)empirical_stat_GSN_earlyFigure S7: Sentences gradually drift away from the initial distribution across the burn-in period. Light green region represents the 95% confidence interval for the mean naturalness of Wikipedia sentences while grey region represents the same interval around the stationary distribution of the converged chain. Top row represents chains that are initialized at high-probability states, while bottom row is initialized in low-probability states.
e71e4f64-0695-43f7-a4af-b57f47ad2915
trentmkelly/LessWrong-43k
LessWrong
Three ways that "Sufficiently optimized agents appear coherent" can be false There has been a couple of recent posts suggesting that Eliezer Yudkowsky's Sufficiently optimized agents appear coherent thesis does not seem useful because it's vacuously true: one obvious way to formalize "coherent" implies that all agents can be considered coherent. In a previous comment, I suggested that we can formalize "coherent" in a different way to dodge this criticism. I believe there's reason to think that Eliezer never intended "Sufficiently optimized agents appear coherent" to have an airtight argument and be universally true. (The Arbital post contains a number of caveats, including "If there is a particular kind of optimization pressure that seems sufficient to produce a cognitively highly advanced agent, but which also seems sure to overlook some particular form of incoherence, then this would present a loophole in the overall argument and yield a route by which an advanced agent with that particular incoherence might be produced".) In this post, I suggest that considering the ways in which it could be false can be a useful way to frame some recent ideas in AI safety. (Note that this isn't intended to be an exhaustive list.) Distributional shift Even a very powerful optimization process cannot train or test an agent in every possible environment and for every possible scenario (by this I mean some sequence of inputs) that it might face, and some optimization processes may not care about many possible environments/scenarios. Given this, we can expect that if an agent faces a new environment/scenario that's very different from what is was optimized for, it may fail to behave coherently. (Jessica Taylor made a related point in Modeling the capabilities of advanced AI systems as episodic reinforcement learning: "When the test episode is similar to training episodes (e.g. in an online learning context), we should expect trained policies to act like a rational agent maximizing its expected score in this test episode; otherwise, the policy that acts as
1077b626-1b77-460f-930f-cc5fcf5bc242
trentmkelly/LessWrong-43k
LessWrong
D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues This is an entry in the 'Dungeons & Data Science' series, a set of puzzles where players are given a dataset to analyze and an objective to pursue using information from that dataset. After talking with abstractapplic, I've stolen the June 7th scenario slot from him.  I hope that this scenario should be relatively simple: not as easy as abstractapplic's recent introductory scenario, but I still think this would be a fairly approachable starting point if you're new to D&D.Sci. STORY Archmage Anachronos stares at you from beneath his bushy eyebrows.  "I have called you to beseech your aid as the greatest living practitioner of the Ancient and Forbidden Art of...Data Science." "What, my eyes?  No, they've always been like that."  (Image generated using OpenArt) (You've tried many times to explain to Archmage Anachronos that Data Science is neither Ancient nor Forbidden, and in fact that the Calantha Institute of Technology and Thaumaturgy has regular classes in Data Science that he could just attend.  It hasn't seemed to work.  Ever since you used Data Science to help him locate the lair of the Loathsome Lich, he's decided that it must be a mighty power indeed, and that you must be a great wizard of some sort to be able to use it.  Or maybe he just enjoys being dramatic.) You wait to hear what the problem is.  Is the world being swallowed by darkness?  Have the Elemental Lords reawoken and begun subjugating nations?  What dread occasion has led him to seek the aid of one who wields so perilous an art[1]? He tells you that he needs your help brewing some Barkskin Potion. (One side effect of Archmage Anachronos's personality - a love of ridiculous drama, a penchant for overcomplicated schemes, and a strong tendency to frequently disappear to conduct secretive 'archmage business' - is that it is hard to tell whether this is actually as unimportant a matter as it sounds, or whether there is some vitally important objective he needs this potion for.) With his great
ccedadfc-e316-4625-ae7b-8fd6d676c133
trentmkelly/LessWrong-43k
LessWrong
Patient Observation When a person first begins to study naturalism with me, I say to them almost nothing that I’ve written in this sequence.  That might very well be a mistake; I might get better results if I knew some short combination of words that would cause them to correctly understand, intellectually, what we are doing and why, before we started. But in fact I bank on the person’s trust in me a bit, and begin by helping them establish consistent habits of observation.  If they’re interested in studying confusion, I ask them to tap their leg every time they notice they’re confused. I ask them to keep a log book in which they record a few words about their experiences of confusion each day. I ask them to make predictions about what it will be like to notice confusion—what kind of situation will be happening and how they will know in the moment that they are confused—and to compare their observations to their predictions. And then, throughout what has so far proven to be about a three month program, I never shift our focus away from consistent habits of observation. It’s not just where I start. It’s the entire curriculum. From a practical perspective, this dogged persistence is the foundation of naturalism. “Direct observation of the territory”, without patience, gets you something like a bag of tricks. Valuable tricks, but still tricks. Isolated mental motions made when they are convenient and enjoyable, not when they are most needed. With patience, though, you get a life-long practice of epistemic rationality. So the whole naturalism program, from start to finish, consists of the establishment, improvement, and maintenance of consistent habits of observation. Consistent and ceaseless, without any sense that “and then we’ll be done observing and get back to normal”. When my students and I meet, we are constantly talking about what daily practice looked like over the past week, what was too heavy to implement as a regular routine, and how they personally can slow down enough
e043d115-8d27-4c3b-afb4-37d67fcc6e88
StampyAI/alignment-research-dataset/blogs
Blogs
Historic trends in particle accelerator performance *Published Feb 7 2020* None of particle energy, center-of-mass energy nor Lorentz factor achievable by particle accelerators appears to have undergone a discontinuity of more than ten years of progress at previous rates. Details ------- This case study is part of AI Impacts’ [discontinuous progress investigation](https://aiimpacts.org/discontinuous-progress-investigation/). ### Background [Particle accelerators](https://en.wikipedia.org/wiki/Particle_accelerator) propel charged particles at high speeds, typically so that experiments can be conducted on them.[1](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-1-1350 "&#8220;A particle accelerator is a machine that uses electromagnetic fields to propel charged particles to very high speeds and energies, and to contain them in well-defined beams. Large accelerators are used for basic research in particle physics.&#8221; &#8211; &#8220;Particle Accelerator&#8221;. 2019.&nbsp;<em>En.Wikipedia.Org</em>. Accessed June 30 2019. https://en.wikipedia.org/w/index.php?title=Particle_accelerator&amp;oldid=903597299.") ![](https://lh5.googleusercontent.com/QUDe8-AID5ih4W5Sry27HEP7i9QRiMtxwqVyieEIiDKQPjDARIwL3Ud6XNGRCioHfQXDkqtXpQg3_Xun8BQGwfaZVTlKMGRh9sjhWcyV1nv-oG3a76n0jx_pQ1ZJYLb9aoDIvGv7)Fermi National Laboratory particle accelerator[2](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-2-1350 "<a href=\"https://commons.wikimedia.org/wiki/File:Fermilab.jpg\">From Wikimedia Commons:</a> <br><strong>Fermilab, Reidar Hahn [Public domain]</strong> ") ### Trends Our understanding is that key performance metrics for particle accelerators include how much kinetic energy they can generate in particles, how much center-of-mass energy they can create in collisions between particles, and the [Lorentz factor](https://en.wikipedia.org/wiki/Lorentz_factor) they can achieve. ‘Livingston charts’ show progress in particle accelerator efficacy over time, and seem to be common. We took data from a relatively recent and populated one in a slide deck from a Cornell accelerator physics course (see slide 45),[3](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-3-1350 "Hoffstaetter 2019. <em>Classe.Cornell.Edu</em>. Accessed June 30 2019. https://www.classe.cornell.edu/~hoff/LECTURES/10USPAS/notes01.pdf., Slide 45") and extracted data from it, shown in [this spreadsheet](https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing) (see columns ‘year’ and ‘eV’ in tabs ‘Hoffstaetter Hadrons’ and ‘Hoffstaetter Leptons’ for original data). The standard performance metric in a Livingston chart is ‘energy needed for a particle to hit a stationary proton with the same center of mass energy as the actual collisions in the accelerator’. We are uncertain why this metric is used, though it does allow for comparisons to earlier technology in a way that CM energy does not. We used a Lorentz transform to obtain particle energy, center-of-mass energy, and Lorentz factors from the Livingston chart data.[4](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-4-1350 "A <a href=\"https://en.wikipedia.org/wiki/Lorentz_transformation\">Lorentz transform</a> allows us to recalculate velocities with a changed frame of reference, taking into account special relativity, which is a material consideration for such fast-moving objects. </p> <p>See Rick Korzekwa&#8217;s explanation of his calculation of this here: <a href=\"https://docs.google.com/document/d/1Nv-0Jg6lMNobcDbfuruLwA8hYCXiPvq32f3NHz0BCLs/edit?usp=sharing\">https://docs.google.com/document/d/1Nv-0Jg6lMNobcDbfuruLwA8hYCXiPvq32f3NHz0BCLs/edit?usp=sharing</a>") #### Particle energy ##### Data Figure 1 shows our data on particle energy over time, also available in [our spreadsheet](https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing), tab ‘Particle energy’. ![](https://aiimpacts.org/wp-content/uploads/2019/11/ParticleEnergy-1024x768.png)Figure 1: Particle energy in eV over time ##### Discontinuity measurement We chose to model the data as a single exponential trend.[5](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-5-1350 "See <a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#trend-fitting\">our methodology page</a> for more details.") There are no greater than 10-year discontinuities in particle energy at previous rates within this trend.[6](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-6-1350 "See <a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#discontinuity-measurement\">our methodology page</a> for more details, and <a href=\"https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing\">our spreadsheet</a>, tab &#8216;Particle energy&#8217; for our calculation of discontinuities.") #### Center-of-mass energy ##### Data Figure 2 shows our data on center-of-mass energy over time, also available in [this spreadsheet](https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing), tab ‘CM energy’. ![](https://aiimpacts.org/wp-content/uploads/2019/11/CMEnergy-1024x768.png)Figure 2: Center-of-mass energy in eV over time ##### Discontinuity measurement We treated the data as exponential.[7](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-7-1350 "See <strong><a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#trend-fitting\">our methodology page</a></strong> for more details.") There are no greater than 10-year discontinuities in center-of-mass energy at previous rates within this trend.[8](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-8-1350 "See <strong><a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#discontinuity-measurement\">our methodology page</a></strong> for more details, and <a href=\"https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing\"><strong>this spreadsheet</strong></a>, tab &#8216;CM energy&#8217; for our calculation of discontinuities.") #### Lorentz factor According to Wikipedia, ‘[The Lorentz factor](https://en.wikipedia.org/wiki/Lorentz_factor) or Lorentz term is the factor by which time, length, and relativistic mass change for an object while that object is moving.’ ##### Data [This spreadsheet](https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit#gid=654930714), tab ‘Lorentz factor’, shows our calculated data for progress on Lorentz factors attained over time. ![](https://aiimpacts.org/wp-content/uploads/2020/02/LorentzFactor-1024x768.png) Figure 3: Lorentz factor (gamma) over time. ##### Discontinuity measurement We treated the data as one exponential trend.[9](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-9-1350 "See <strong><a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#trend-fitting\">our methodology page</a></strong> for more details.") There were no greater than 10-year discontinuities at previous rates within this trend.[10](https://aiimpacts.org/particle-accelerator-performance-progress/#easy-footnote-bottom-10-1350 "See <strong><a href=\"https://aiimpacts.org/methodology-for-discontinuity-investigation/#discontinuity-measurement\">our methodology page</a></strong> for more details, and <a href=\"https://docs.google.com/spreadsheets/d/1fsEV5vtdArk6Q0RqAacmrErDif3x7O1JcPOn3BqIBY0/edit?usp=sharing\"><strong>our spreadsheet</strong></a>, tab &#8216;Lorentz factor&#8217; for our calculation of discontinuities.") *Primary author: Rick Korzekwa* Notes -----
14666d53-704f-4588-9b17-a208f72929ef
StampyAI/alignment-research-dataset/arxiv
Arxiv
A Comparative Analysis of Expected and Distributional Reinforcement Learning 1 Introduction --------------- The distributional perspective, in which one models the distribution of returns from a state instead of only its expected value, was recently introduced by (?). The first distributional reinforcement learning algorithm, C51, saw dramatic improvements in performance in many Atari 2600 games when compared to an algorithm that only modelled expected values (?). Since then, additional distributional algorithms have been proposed, such as quantile regression (?) and implicit quantile networks (?), with many of these improving on the results of C51. The abundance of empirical results make it hard to dispute that taking the distributional perspective is helpful in deep reinforcement learning problems, but theoretical motivation for this perspective is comparatively scarce. Possible reasons for this include the following, proposed by (?) . 1. 1. Reduced chattering: modeling a distribution may reduce prediction variance, which may help in policy iteration. 2. 2. Improved optimization behaviour: distributions may present a more stable learning target, or in some cases (e.g. the softmax distribution used in the C51 algorithm) have a regularizing effect in optimization for neural networks. 3. 3. Auxiliary tasks: the distribution offers a richer set of predictions for learning, serving as a set of auxiliary tasks which is tightly coupled to the reward. Initial efforts to provide a theoretical framework for the analysis of distributional algorithms demonstrated their convergence properties (?), and did not directly compare their expected performance to expected algorithms. Indeed, even experimental results supporting the distributional perspective have largely been restricted to the deep reinforcement learning setting, and it is not clear whether the benefits of the distributional perspective also hold in simpler tasks. In this paper we continue lifting the veil on this mystery by investigating the behavioural differences between distributional and expected RL, and whether these behavioural differences necessarily result in an advantage for distributional methods. 2 Background ------------- We model the reinforcement learning problem as an agent interacting with an environment so as to maximize cumulative discounted reward. We formalize the notion of an environment with a Markov Decision Process (MDP) defined as the tuple (𝒳,𝒜,R,P,γ)𝒳𝒜𝑅𝑃𝛾(\mathcal{X},\mathcal{A},R,P,\gamma)( caligraphic\_X , caligraphic\_A , italic\_R , italic\_P , italic\_γ ), where 𝒳𝒳\mathcal{X}caligraphic\_X denotes the state space, 𝒜𝒜\mathcal{A}caligraphic\_A the set of possible actions, R:𝒳×𝒜→Dist([−RMAX,RMAX]):𝑅→𝒳𝒜𝐷𝑖𝑠𝑡subscript𝑅𝑀𝐴𝑋subscript𝑅𝑀𝐴𝑋R:\mathcal{X}\times\mathcal{A}\rightarrow Dist([-R\_{MAX},R\_{MAX}])italic\_R : caligraphic\_X × caligraphic\_A → italic\_D italic\_i italic\_s italic\_t ( [ - italic\_R start\_POSTSUBSCRIPT italic\_M italic\_A italic\_X end\_POSTSUBSCRIPT , italic\_R start\_POSTSUBSCRIPT italic\_M italic\_A italic\_X end\_POSTSUBSCRIPT ] ) is a stochastic reward function mapping state-action pairs to a distribution over a set of bounded rewards, P𝑃Pitalic\_P the transition probability kernel, and γ∈[0,1)𝛾01\gamma\in[0,1)italic\_γ ∈ [ 0 , 1 ) the discount factor. We denote by π:𝒳→Dist(𝒜):𝜋→𝒳𝐷𝑖𝑠𝑡𝒜\pi:\mathcal{X}\rightarrow Dist(\mathcal{A})italic\_π : caligraphic\_X → italic\_D italic\_i italic\_s italic\_t ( caligraphic\_A ) a stochastic policy mapping states to a distribution over actions (i.e. π(a|x)𝜋conditional𝑎𝑥\pi(a|x)italic\_π ( italic\_a | italic\_x ) is the agent’s probability of choosing action a𝑎aitalic\_a in state x𝑥xitalic\_x). We will use the notation Q𝑄Qitalic\_Q to refer to state-action value function, which has the type Q:𝒳×𝒜→ℝ:𝑄→𝒳𝒜ℝQ:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}italic\_Q : caligraphic\_X × caligraphic\_A → blackboard\_R. The value of a specific policy π𝜋\piitalic\_π is given by the value function Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, defined as the discounted sum of expected future rewards after choosing action a𝑎aitalic\_a from state s𝑠sitalic\_s and then following π𝜋\piitalic\_π | | | | | --- | --- | --- | | | Qπ(x,a):=𝔼π,P[∑t=0∞γtR(xt,at)|x0=x,a0=a].assignsuperscript𝑄𝜋𝑥𝑎subscript𝔼𝜋𝑃delimited-[]formulae-sequenceconditionalsuperscriptsubscript𝑡0superscript𝛾𝑡𝑅subscript𝑥𝑡subscript𝑎𝑡subscript𝑥0𝑥subscript𝑎0𝑎Q^{\pi}(x,a):=\mathbb{E}\_{\pi,P}\bigg{[}\sum\_{t=0}^{\infty}\gamma^{t}R(x\_{t},a\_{t})\bigg{|}x\_{0}=x,a\_{0}=a\bigg{]}.italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) := blackboard\_E start\_POSTSUBSCRIPT italic\_π , italic\_P end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ∞ end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_R ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) | italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_x , italic\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_a ] . | | One can also express this value as the fixed point of the Bellman operator Tπsuperscript𝑇𝜋T^{\pi}italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT (?), defined as | | | | | | --- | --- | --- | --- | | | TπQ(x,a):=assignsuperscript𝑇𝜋𝑄𝑥𝑎absent\displaystyle T^{\pi}Q(x,a):=italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q ( italic\_x , italic\_a ) := | 𝔼[R(x,a)]𝔼delimited-[]𝑅𝑥𝑎\displaystyle\mathbb{E}[R(x,a)]blackboard\_E [ italic\_R ( italic\_x , italic\_a ) ] | | | | | +γ∑x′,a′P(x′|x,a)π(a′|x′)Q(x′,a′).𝛾subscriptsuperscript𝑥′superscript𝑎′𝑃conditionalsuperscript𝑥′𝑥𝑎𝜋conditionalsuperscript𝑎′superscript𝑥′𝑄superscript𝑥′superscript𝑎′\displaystyle+\gamma\sum\_{x^{\prime},a^{\prime}}P(x^{\prime}|x,a)\pi(a^{\prime}|x^{\prime})Q(x^{\prime},a^{\prime}).+ italic\_γ ∑ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_P ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x , italic\_a ) italic\_π ( italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) italic\_Q ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) . | | The Bellman operator Tπsuperscript𝑇𝜋T^{\pi}italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT depends on the policy π𝜋\piitalic\_π, and is used in policy evaluation (?). When we seek to improve the current policy, we enter the control setting. In this setting, we modify the previous Bellman operator to obtain the Bellman optimality operator T\*superscript𝑇T^{\*}italic\_T start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT, given by | | | | | --- | --- | --- | | | T\*Q(x,a):=𝔼[R(x,a)]+γ∑x′P(x′|x,a)maxa′⁡Q(x′,a′).assignsuperscript𝑇𝑄𝑥𝑎𝔼delimited-[]𝑅𝑥𝑎𝛾subscriptsuperscript𝑥′𝑃conditionalsuperscript𝑥′𝑥𝑎subscriptsuperscript𝑎′𝑄superscript𝑥′superscript𝑎′T^{\*}Q(x,a):=\mathbb{E}[R(x,a)]+\gamma\sum\_{x^{\prime}}P(x^{\prime}|x,a)\max\_{a^{\prime}}Q(x^{\prime},a^{\prime}).italic\_T start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT italic\_Q ( italic\_x , italic\_a ) := blackboard\_E [ italic\_R ( italic\_x , italic\_a ) ] + italic\_γ ∑ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_P ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x , italic\_a ) roman\_max start\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) . | | In many problems of interest, we do not have a full model of the MDP, and instead use a family of stochastic versions of the Bellman operator called temporal difference (TD) updates (?). We will focus on the SARSA update (?), defined as follows. We fix a policy π𝜋\piitalic\_π and let (xt,at,rt,xt+1,at+1)subscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡subscript𝑥𝑡1subscript𝑎𝑡1(x\_{t},a\_{t},r\_{t},x\_{t+1},a\_{t+1})( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) be a sampled transition from the MDP, where rt∼R(xt,at)similar-tosubscript𝑟𝑡𝑅subscript𝑥𝑡subscript𝑎𝑡r\_{t}\sim R(x\_{t},a\_{t})italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_R ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) is a realized reward and at+1∼π(⋅|xt+1)a\_{t+1}\sim\pi(\cdot|x\_{t+1})italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_π ( ⋅ | italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ). We let αtsubscript𝛼𝑡\alpha\_{t}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT be a step size parameter for time t𝑡titalic\_t. Then given a value function estimate Qt:𝒳×𝒜→ℝ:subscript𝑄𝑡→𝒳𝒜ℝQ\_{t}:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT : caligraphic\_X × caligraphic\_A → blackboard\_R at time t𝑡titalic\_t, the SARSA update gives the new estimate Qt+1subscript𝑄𝑡1Q\_{t+1}italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT: | | | | | | --- | --- | --- | --- | | | Qt+1(xt,at)=(1−αt)Qt(xt,at)+αt(rt+γQt(xt+1,at+1)).subscript𝑄𝑡1subscript𝑥𝑡subscript𝑎𝑡1subscript𝛼𝑡subscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡subscript𝛼𝑡subscript𝑟𝑡𝛾subscript𝑄𝑡subscript𝑥𝑡1subscript𝑎𝑡1Q\_{t+1}(x\_{t},a\_{t})=(1-\alpha\_{t})Q\_{t}(x\_{t},a\_{t})+\alpha\_{t}(r\_{t}+\gamma Q\_{t}(x\_{t+1},a\_{t+1})).italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ) . | | (1) | Under certain conditions on the MDP and αtsubscript𝛼𝑡\alpha\_{t}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, SARSA converges to Qπsuperscript𝑄𝜋Q^{\pi}italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT (?). Semi-gradient SARSA updates extend the SARSA update from the tabular to the function approximation setting. We consider a parameter vector θtsubscript𝜃𝑡\theta\_{t}italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and feature vectors ϕx,asubscriptitalic-ϕ𝑥𝑎\phi\_{x,a}italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT for each (x,a)∈𝒳×𝒜𝑥𝑎𝒳𝒜(x,a)\in\mathcal{X}\times\mathcal{A}( italic\_x , italic\_a ) ∈ caligraphic\_X × caligraphic\_A such that | | | | | --- | --- | --- | | | Qt(x,a)=θtTϕx,a.subscript𝑄𝑡𝑥𝑎superscriptsubscript𝜃𝑡𝑇subscriptitalic-ϕ𝑥𝑎Q\_{t}(x,a)=\theta\_{t}^{T}\phi\_{x,a}.italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) = italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT . | | Given θtsubscript𝜃𝑡\theta\_{t}italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, θt+1subscript𝜃𝑡1\theta\_{t+1}italic\_θ start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is given by the semigradient update (?) | | | | | | --- | --- | --- | --- | | | θt+1:=θt−αt(θtTϕxt,at−rt+γθtTϕxt+1,at+1)ϕxt,at.assignsubscript𝜃𝑡1subscript𝜃𝑡subscript𝛼𝑡superscriptsubscript𝜃𝑡𝑇subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡𝛾superscriptsubscript𝜃𝑡𝑇subscriptitalic-ϕsubscript𝑥𝑡1subscript𝑎𝑡1subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡\theta\_{t+1}:=\theta\_{t}-\alpha\_{t}(\theta\_{t}^{T}\phi\_{x\_{t},a\_{t}}-r\_{t}+\gamma\theta\_{t}^{T}\phi\_{x\_{t+1},a\_{t+1}})\phi\_{x\_{t},a\_{t}}.italic\_θ start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT . | | (2) | Instead of considering only the expected return from a state-action pair, one can consider the distribution of returns. We will use the notation Z:𝒳×𝒜→𝐷𝑖𝑠𝑡(ℝ):𝑍→𝒳𝒜𝐷𝑖𝑠𝑡ℝZ:\mathcal{X}\times\mathcal{A}\rightarrow\textit{Dist}(\mathbb{R})italic\_Z : caligraphic\_X × caligraphic\_A → Dist ( blackboard\_R ) to denote a return distribution function. We can then construct an analogous Bellman operator for these functions, as shown by (?) and termed the distributional Bellman operator, T𝒟πsubscriptsuperscript𝑇𝜋𝒟T^{\pi}\_{\mathcal{D}}italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT: | | | | | | --- | --- | --- | --- | | | T𝒟πZ(x,a)=𝐷R(x,a)+γZ(X′,A′)subscriptsuperscript𝑇𝜋𝒟𝑍𝑥𝑎𝐷𝑅𝑥𝑎𝛾𝑍superscript𝑋′superscript𝐴′T^{\pi}\_{\mathcal{D}}Z(x,a)\overset{D}{=}R(x,a)+\gamma Z(X^{\prime},A^{\prime})italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT italic\_Z ( italic\_x , italic\_a ) overitalic\_D start\_ARG = end\_ARG italic\_R ( italic\_x , italic\_a ) + italic\_γ italic\_Z ( italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) | | (3) | where X′,A′superscript𝑋′superscript𝐴′X^{\prime},A^{\prime}italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT are the random variables corresponding to the next state and action. This is equality in distribution, and not an equality of random variables. Analogous to the expected Bellman operator, repeated application of the distributional Bellman operator can be shown to converge to the true distribution of returns (?). Later work showed the convergence of stochastic updates in the distributional setting (?). The proof of convergence of the distributional Bellman operator uses a contraction argument, which in the distributional setting requires us to be careful about how we define the distance between two return distribution functions. Probability divergences and metrics capture this notion of distance. The theoretical properties of some probability distribution metrics have been previously explored by (?), and the Wasserstein distance in particular studied further in the context of MDPs (?) as well as in the generative model literature (?). The Wasserstein distance also appears in the distributional reinforcement learning literature, but we omit its definition in favour of the related Cramér distance, whose properties make it more amenable to the tools we use in our analysis. The C51 algorithm uses the cross-entropy loss function to achieve promising performance in Atari games; however, the role of the cross-entropy loss in distributional RL has not been the subject of much theoretical analysis. We will use primarily the Cramér distance (?) in the results that follow, which has been studied in greater depth in the distributional RL literature. Motivations for the use of this distance have been previously outlined for generative models (?). ###### Definition 1 (Cramér Distance). Let P,Q𝑃𝑄P,Qitalic\_P , italic\_Q be two probability distributions with Cumulative Distribution Functions (CDFs) FP,FQsubscript𝐹𝑃subscript𝐹𝑄F\_{P},F\_{Q}italic\_F start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT , italic\_F start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT. The Cramér metric ℓ2subscriptnormal-ℓ2\ell\_{2}roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT between P𝑃Pitalic\_P and Q𝑄Qitalic\_Q is defined as follows: | | | | | --- | --- | --- | | | ℓ2(P,Q)=∫ℝ(FP(x)−FQ(x))2𝑑xsubscriptℓ2𝑃𝑄subscriptℝsuperscriptsubscript𝐹𝑃𝑥subscript𝐹𝑄𝑥2differential-d𝑥\ell\_{2}(P,Q)=\sqrt{\int\_{\mathbb{R}}(F\_{P}(x)-F\_{Q}(x))^{2}dx}roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_P , italic\_Q ) = square-root start\_ARG ∫ start\_POSTSUBSCRIPT blackboard\_R end\_POSTSUBSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT ( italic\_x ) - italic\_F start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT ( italic\_x ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_d italic\_x end\_ARG | | We will overload notation and write equivalently ℓ2(P,Q)≡ℓ2(FP,FQ)subscriptnormal-ℓ2𝑃𝑄subscriptnormal-ℓ2subscript𝐹𝑃subscript𝐹𝑄\ell\_{2}(P,Q)\equiv\ell\_{2}(F\_{P},F\_{Q})roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_P , italic\_Q ) ≡ roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT , italic\_F start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT ) or, when X𝑋Xitalic\_X and Y𝑌Yitalic\_Y are random variables with laws P𝑃Pitalic\_P and Q𝑄Qitalic\_Q, ℓ2(P,Q)≡ℓ2(X,Y)subscriptnormal-ℓ2𝑃𝑄subscriptnormal-ℓ2𝑋𝑌\ell\_{2}(P,Q)\equiv\ell\_{2}(X,Y)roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_P , italic\_Q ) ≡ roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_X , italic\_Y ). Practically, distributional reinforcement learning algorithms require that we approximate distributions. There are many ways one can do this, for example by predicting the quantiles of the return distribution (?). In our analysis we will focus on the class of categorical distributions with finite support. Given some fixed set 𝐳={z1,…,zK}∈ℝK𝐳subscript𝑧1…subscript𝑧𝐾superscriptℝ𝐾\mathbf{z}=\{z\_{1},\dots,z\_{K}\}\in\mathbb{R}^{K}bold\_z = { italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT } ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K end\_POSTSUPERSCRIPT with z1≤z2≤⋯≤zKsubscript𝑧1subscript𝑧2⋯subscript𝑧𝐾z\_{1}\leq z\_{2}\leq\dots\leq z\_{K}italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ≤ italic\_z start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ≤ ⋯ ≤ italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT, a categorical distribution P𝑃Pitalic\_P with support 𝐳𝐳\mathbf{z}bold\_z is a mixture of Dirac measures on each of the zisubscript𝑧𝑖z\_{i}italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT’s, having the form | | | | | | --- | --- | --- | --- | | | P∈𝒵z:={∑i=1Kαiδzi:αi≥0,∑i=1Kαi=1}.𝑃subscript𝒵𝑧assignconditional-setsuperscriptsubscript𝑖1𝐾subscript𝛼𝑖subscript𝛿subscript𝑧𝑖formulae-sequencesubscript𝛼𝑖0superscriptsubscript𝑖1𝐾subscript𝛼𝑖1P\in\mathcal{Z}\_{z}:=\left\{\sum\_{i=1}^{K}\alpha\_{i}\delta\_{z\_{i}}:\alpha\_{i}\geq 0,\sum\_{i=1}^{K}\alpha\_{i}=1\right\}.italic\_P ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT := { ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_K end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT : italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ≥ 0 , ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_K end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = 1 } . | | (4) | Under this class of distributions, the Cramér distance becomes a finite sum | | | | | --- | --- | --- | | | ℓ2(FP,FQ)=∑i=1K−1(zi+1−zi)(FP(zi)−FQ(zi))2subscriptℓ2subscript𝐹𝑃subscript𝐹𝑄superscriptsubscript𝑖1𝐾1subscript𝑧𝑖1subscript𝑧𝑖superscriptsubscript𝐹𝑃subscript𝑧𝑖subscript𝐹𝑄subscript𝑧𝑖2\ell\_{2}(F\_{P},F\_{Q})=\sqrt{\sum\_{i=1}^{K-1}(z\_{i+1}-z\_{i})(F\_{P}(z\_{i})-F\_{Q}(z\_{i}))^{2}}roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT , italic\_F start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT ) = square-root start\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_K - 1 end\_POSTSUPERSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ( italic\_F start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - italic\_F start\_POSTSUBSCRIPT italic\_Q end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT end\_ARG | | which amounts to a weighted Euclidean norm between the CDFs of the two distributions. When the atoms of the support are equally spaced apart, we get a scalar multiple of the Euclidean distance between the vectors of the CDFs. We can use the Cramér distance to define a projection onto a fixed categorical support z (?). ###### Definition 2 (Cramér Projection). Let 𝐳𝐳\mathbf{z}bold\_z be an ordered set of K𝐾Kitalic\_K real numbers. For a Dirac measure δysubscript𝛿𝑦\delta\_{y}italic\_δ start\_POSTSUBSCRIPT italic\_y end\_POSTSUBSCRIPT, the Cramér projection ΠC(δy)subscriptnormal-Π𝐶subscript𝛿𝑦\Pi\_{C}(\delta\_{y})roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_δ start\_POSTSUBSCRIPT italic\_y end\_POSTSUBSCRIPT ) onto the support 𝐳𝐳\mathbf{z}bold\_z is given by: | | | | | --- | --- | --- | | | ΠC(δy)={δz1if y≤z1zi+1−yzi+1−ziδzi+y−zizi+1−ziδzi+1if zi<y≤zi+1δzKif y>zKsubscriptΠ𝐶subscript𝛿𝑦casessubscript𝛿subscript𝑧1if y≤z1subscript𝑧𝑖1𝑦subscript𝑧𝑖1subscript𝑧𝑖subscript𝛿subscript𝑧𝑖𝑦subscript𝑧𝑖subscript𝑧𝑖1subscript𝑧𝑖subscript𝛿subscript𝑧𝑖1if zi<y≤zi+1subscript𝛿subscript𝑧𝐾if y>zK\Pi\_{C}(\delta\_{y})=\begin{cases}\delta\_{z\_{1}}&\text{if $y\leq z\_{1}$}\\ \frac{z\_{i+1}-y}{z\_{i+1}-z\_{i}}\delta\_{z\_{i}}+\frac{y-z\_{i}}{z\_{i+1}-z\_{i}}\delta\_{z\_{i+1}}&\text{if $z\_{i}<y\leq z\_{i+1}$}\\ \delta\_{z\_{K}}&\text{if $y>z\_{K}$}\end{cases}roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_δ start\_POSTSUBSCRIPT italic\_y end\_POSTSUBSCRIPT ) = { start\_ROW start\_CELL italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT end\_CELL start\_CELL if italic\_y ≤ italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_CELL end\_ROW start\_ROW start\_CELL divide start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_y end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT + divide start\_ARG italic\_y - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT end\_CELL start\_CELL if italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT < italic\_y ≤ italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT end\_CELL end\_ROW start\_ROW start\_CELL italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT end\_CELL start\_CELL if italic\_y > italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT end\_CELL end\_ROW | | The operator ΠCsubscriptΠ𝐶\Pi\_{C}roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT has two notable properties: first, as hinted by the name Cramér projection, it produces the distribution supported on 𝐳𝐳\mathbf{z}bold\_z which minimizes the Cramér distance to the original distribution. Second, if the support of the distribution is contained in the interval [z1,zK]subscript𝑧1subscript𝑧𝐾[z\_{1},z\_{K}][ italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ], we can show that the Cramér projection preserves the distribution’s expected value [1](#Thmprop1a "Proposition 1. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"). It is thus a natural approximation tool for categorical distributions. ###### Proposition 1. Let 𝐳∈ℝk𝐳superscriptℝ𝑘\mathbf{z}\in\mathbb{R}^{k}bold\_z ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT, and P𝑃Pitalic\_P be a mixture of Dirac distributions (see Eq. [4](#S2.E4 "4 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning")) whose support is contained in the interval [z1,zK]subscript𝑧1subscript𝑧𝐾[z\_{1},z\_{K}][ italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ]. Then the Cramér projection ΠC(P)subscriptnormal-Π𝐶𝑃\Pi\_{C}(P)roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_P ) onto 𝐳𝐳\mathbf{z}bold\_z is such that | | | | | --- | --- | --- | | | 𝔼[ΠC(P)]=𝔼[P]𝔼delimited-[]subscriptΠ𝐶𝑃𝔼delimited-[]𝑃\mathbb{E}[\Pi\_{C}(P)]=\mathbb{E}[P]blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_P ) ] = blackboard\_E [ italic\_P ] | | . The Cramér projection is implicit in the C51 algorithm, introduced by (?). The C51 algorithm uses a deep neural network to compute a softmax distribution, then updates its weights according to the cross-entropy loss between its prediction and a sampled target distribution, which is then projected onto the support of the predicted distribution using the Cramér projection. 3 The Coupled Updates Method ----------------------------- We are interested in the behavioural differences (or lack thereof) between distributional and expected RL algorithms. We will study these differences through a methodology where we couple the experience used by the update rules of the two algorithms. Under this methodology, we consider pairs of agents: one that learns a value function (the *expectation learner*) and one that learns a value distribution (the *distribution learner*). The output of the first learner is a sequence of value functions Q1,Q2,…subscript𝑄1subscript𝑄2…Q\_{1},Q\_{2},\dotsitalic\_Q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , …. The output of the second is a sequence of value distributions Z1,Z2,…subscript𝑍1subscript𝑍2…Z\_{1},Z\_{2},\dotsitalic\_Z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_Z start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , …. More precisely, each sequence is constructed via some update rule: | | | | | --- | --- | --- | | | Qt+1:=UE(Qt,ωt)Zt+1:=UD(Zt,ωt),formulae-sequenceassignsubscript𝑄𝑡1subscript𝑈𝐸subscript𝑄𝑡subscript𝜔𝑡assignsubscript𝑍𝑡1subscript𝑈𝐷subscript𝑍𝑡subscript𝜔𝑡Q\_{t+1}:=U\_{E}(Q\_{t},\omega\_{t})\qquad Z\_{t+1}:=U\_{D}(Z\_{t},\omega\_{t}),italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_U start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT ( italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT ( italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , | | from initial conditions Q0subscript𝑄0Q\_{0}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT and Z0subscript𝑍0Z\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, respectively, and where ωtsubscript𝜔𝑡\omega\_{t}italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is drawn from some sample space ΩΩ\Omegaroman\_Ω. These update rules may be deterministic (for example, an application of the Bellman operator) or random (a sample-based update such as TD-learning). Importantly, however, the two updates may be coupled through the common sample ωtsubscript𝜔𝑡\omega\_{t}italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Intuitively, ΩΩ\Omegaroman\_Ω can be thought of as the source of randomness in the MDP. Key to our analysis is to study update rules that are analogues of one another. If UEsubscript𝑈𝐸U\_{E}italic\_U start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT is the Bellman operator, for example, then UDsubscript𝑈𝐷U\_{D}italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT is the distributional Bellman operator. More generally speaking, we will distinguish between *model-based* update rules, which do not depend on ωtsubscript𝜔𝑡\omega\_{t}italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and *sample-based* update rules, which do. In the latter case, we will assume access to a scheme that generates sample transitions based on the sequence ω1,ω2,…subscript𝜔1subscript𝜔2…\omega\_{1},\omega\_{2},\dotsitalic\_ω start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_ω start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , …, that is, a generator G:ω1,…,ωt↦(xt,at,rt,xt+1,at+1):𝐺maps-tosubscript𝜔1…subscript𝜔𝑡 subscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡subscript𝑥𝑡1subscript𝑎𝑡1G:\omega\_{1},\dots,\omega\_{t}\mapsto(x\_{t},a\_{t},r\_{t},x\_{t+1},a\_{t+1})italic\_G : italic\_ω start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ↦ ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ). Under this scheme, a pair UE,UDsubscript𝑈𝐸subscript𝑈𝐷U\_{E},U\_{D}italic\_U start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT , italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT of sampled-based update rules receive exactly the same sample transitions (for each possible realization); hence the *coupled updates method*, inspired by the notion of coupling from the probability theory literature (?). The main question we will answer is: *which analogue update rules preserve expectations?* Specifically, we write | | | | | --- | --- | --- | | | Z=𝔼Q⇔𝔼[Z(x,a)]=Q(x,a)∀(x,a)∈𝒳×𝒜.iff𝑍𝔼𝑄formulae-sequence𝔼delimited-[]𝑍𝑥𝑎𝑄𝑥𝑎for-all𝑥𝑎𝒳𝒜Z\overset{\mathbb{E}}{=}Q\iff\mathbb{E}[Z(x,a)]=Q(x,a)\quad\forall\;(x,a)\in\mathcal{X}\times\mathcal{A}.italic\_Z overblackboard\_E start\_ARG = end\_ARG italic\_Q ⇔ blackboard\_E [ italic\_Z ( italic\_x , italic\_a ) ] = italic\_Q ( italic\_x , italic\_a ) ∀ ( italic\_x , italic\_a ) ∈ caligraphic\_X × caligraphic\_A . | | We will say that analogue rules UDsubscript𝑈𝐷U\_{D}italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT and UEsubscript𝑈𝐸U\_{E}italic\_U start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT are *expectation-equivalent* if, for all sequences (ωt)subscript𝜔𝑡(\omega\_{t})( italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), and for all Z0subscript𝑍0Z\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT and Q0subscript𝑄0Q\_{0}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, | | | | | --- | --- | --- | | | Z0=𝔼Q0⇔Zt=𝔼Qt∀t∈ℕ.iffsubscript𝑍0𝔼subscript𝑄0subscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡 ℕZ\_{0}\overset{\mathbb{E}}{=}Q\_{0}\iff Z\_{t}\overset{\mathbb{E}}{=}Q\_{t}\quad\forall t\in\mathbb{N}.italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ⇔ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N . | | Our coupling-inspired methodology will allow us to rule out a number of common hypotheses regarding the good performance of distributional reinforcement learning: Distributional RL reduces variance. By our coupling argument, any expectation-equivalent rules UDsubscript𝑈𝐷U\_{D}italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT and UEsubscript𝑈𝐸U\_{E}italic\_U start\_POSTSUBSCRIPT italic\_E end\_POSTSUBSCRIPT produce exactly the same sequence of expected values, *along each sample trajectory*. The distributions of expectations relative to the random draws ω1,ω2,…subscript𝜔1subscript𝜔2…\omega\_{1},\omega\_{2},\dotsitalic\_ω start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_ω start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , … are identical and therefore, Var[𝔼Zt(x,a)]=Var[Qt(x,a)]Vardelimited-[]𝔼subscript𝑍𝑡𝑥𝑎Vardelimited-[]subscript𝑄𝑡𝑥𝑎\text{Var}[\mathbb{E}Z\_{t}(x,a)]=\text{Var}[Q\_{t}(x,a)]Var [ blackboard\_E italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] = Var [ italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] everywhere, and UDsubscript𝑈𝐷U\_{D}italic\_U start\_POSTSUBSCRIPT italic\_D end\_POSTSUBSCRIPT does not produce lower variance estimates. Distributional RL helps with policy iteration. One may imagine that distributional RL helps identify the best action. But if Zt=𝔼Qtsubscript𝑍𝑡𝔼subscript𝑄𝑡Z\_{t}\overset{\mathbb{E}}{=}Q\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT everywhere, then also greedy policies based on argmax⁡Qt(x,⋅)argmaxsubscript𝑄𝑡𝑥⋅\operatorname\*{arg\,max}Q\_{t}(x,\cdot)start\_OPERATOR roman\_arg roman\_max end\_OPERATOR italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , ⋅ ) and argmax⁡𝔼Zt(x,⋅)argmax𝔼subscript𝑍𝑡𝑥⋅\operatorname\*{arg\,max}\mathbb{E}Z\_{t}(x,\cdot)start\_OPERATOR roman\_arg roman\_max end\_OPERATOR blackboard\_E italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , ⋅ ) agree. Hence our results (presented in the context of policy evaluation) extend to the setting in which actions are selected on the basis of their expectation (e.g. ϵitalic-ϵ\epsilonitalic\_ϵ-greedy, softmax). Distributional RL is more stable with function approximation. We will use the coupling methodology to provide evidence that, at least combined with linear function approximation, distributional update rules do not improve performance. 4 Analysis of Behavioural Differences -------------------------------------- Our coupling-inspired methodology provides us with a stable framework to perform a theoretical investigation of the behavioural differences between distributional and expected RL. We use it through a progression of settings that will gradually increase in complexity to shed light on what causes distributional algorithms to behave differently from expected algorithms. We consider three axes of complexity: 1) how we approximate the state space, 2) how we represent the distribution, and 3) how we perform updates on the predicted distribution function. ### 4.1 Tabular models We first consider tabular representations, which uniquely represent the predicted return distribution at each state-action pair. We start with the simplest class of updates, that of the expected and distributional Bellman operators. Here and below we will write 𝒵𝒵\mathcal{Z}caligraphic\_Z for the space of bounded return distribution functions and 𝒬𝒬\mathcal{Q}caligraphic\_Q for the space of bounded value functions. We begin with results regarding two model-based update rules. ###### Proposition 2. Let Z0∈𝒵subscript𝑍0𝒵Z\_{0}\in\mathcal{Z}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z and Q0∈𝒬subscript𝑄0𝒬Q\_{0}\in\mathcal{Q}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q, and suppose that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. If | | | | | --- | --- | --- | | | Zt+1:=T𝒟πZtQt+1:=TπQt,formulae-sequenceassignsubscript𝑍𝑡1subscriptsuperscript𝑇𝜋𝒟subscript𝑍𝑡assignsubscript𝑄𝑡1superscript𝑇𝜋subscript𝑄𝑡Z\_{t+1}:=T^{\pi}\_{\mathcal{D}}Z\_{t}\quad Q\_{t+1}:=T^{\pi}Q\_{t},italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , | | then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. See the supplemental material for the proof of this result and those that follow. We next consider a categorical, tabular representation where the distribution at each state-action pair is stored explicitly but, as per Equation ([4](#S2.E4 "4 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning")), restricted to the finite support 𝐳={z1,…,zK}𝐳subscript𝑧1…subscript𝑧𝐾\mathbf{z}=\{z\_{1},\dots,z\_{K}\}bold\_z = { italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT }, (?). Unlike the tabular representation of Prop. [2](#Thmprop2a "Proposition 2. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"), this algorithm has a practical implementation; however, after each Bellman update we must project back the result into the space of those finite-support distributions, giving rise to a projected operator. ###### Proposition 3. Suppose that the finite support brackets the set of attainable value distributions, in the sense that z1≤−Rmax1−γsubscript𝑧1subscript𝑅max1𝛾z\_{1}\leq-\frac{R\_{\textsc{max}}}{1-\gamma}italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ≤ - divide start\_ARG italic\_R start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT end\_ARG start\_ARG 1 - italic\_γ end\_ARG and zK≥Rmax1−γsubscript𝑧𝐾subscript𝑅max1𝛾z\_{K}\geq\frac{R\_{\textsc{max}}}{1-\gamma}italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ≥ divide start\_ARG italic\_R start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT end\_ARG start\_ARG 1 - italic\_γ end\_ARG. Define the projected distributional operator | | | | | --- | --- | --- | | | TCπ:=ΠCT𝒟π.assignsuperscriptsubscript𝑇𝐶𝜋subscriptΠ𝐶subscriptsuperscript𝑇𝜋𝒟T\_{C}^{\pi}:=\Pi\_{C}T^{\pi}\_{\mathcal{D}}.italic\_T start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT := roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT . | | Suppose Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, for Z0∈𝒵z,Q0∈𝒬formulae-sequencesubscript𝑍0subscript𝒵𝑧subscript𝑄0𝒬Z\_{0}\in\mathcal{Z}\_{z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q. If | | | | | --- | --- | --- | | | Zt+1:=TCπZtQt+1:=TπQt,formulae-sequenceassignsubscript𝑍𝑡1subscriptsuperscript𝑇𝜋𝐶subscript𝑍𝑡assignsubscript𝑄𝑡1superscript𝑇𝜋subscript𝑄𝑡Z\_{t+1}:=T^{\pi}\_{C}Z\_{t}\quad Q\_{t+1}:=T^{\pi}Q\_{t},italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , | | then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. Next, we consider sample-based update rules, still in the tabular setting. These roughly correspond to the Categorical SARSA algorithm whose convergence was established by (?), with or without the projection step. Here we highlight the expectation-equivalence of this algorithm to the classic SARSA algorithm (?). For these results we will need some additional notation. Consider a sample transition (xt,at,rt,xt+1,at+1)subscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡subscript𝑥𝑡1subscript𝑎𝑡1(x\_{t},a\_{t},r\_{t},x\_{t+1},a\_{t+1})( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ). Given a random variable Y𝑌Yitalic\_Y, denote its probability distribution by PYsubscript𝑃𝑌P\_{Y}italic\_P start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT and its cumulative distribution function by FYsubscript𝐹𝑌F\_{Y}italic\_F start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT, respectively. With some abuse of notation we extend this to value distributions and write PZ(x,a)subscript𝑃𝑍𝑥𝑎P\_{Z}(x,a)italic\_P start\_POSTSUBSCRIPT italic\_Z end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) and FZ(x,a)subscript𝐹𝑍𝑥𝑎F\_{Z}(x,a)italic\_F start\_POSTSUBSCRIPT italic\_Z end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) for the probability distribution and cumulative distribution function, respectively, corresponding to Z(x,a)𝑍𝑥𝑎Z(x,a)italic\_Z ( italic\_x , italic\_a ). Finally, let Zt′(xt,at)superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡Z\_{t}^{\prime}(x\_{t},a\_{t})italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) be a random variable distributed like the target rt+γZt(xt+1,at+1)subscript𝑟𝑡𝛾subscript𝑍𝑡subscript𝑥𝑡1subscript𝑎𝑡1r\_{t}+\gamma Z\_{t}(x\_{t+1},a\_{t+1})italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ), and write ΠCZt′(x,a)subscriptΠ𝐶superscriptsubscript𝑍𝑡′𝑥𝑎\Pi\_{C}Z\_{t}^{\prime}(x,a)roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) for its Cramér projection onto the support 𝐳𝐳\mathbf{z}bold\_z. ###### Proposition 4. Suppose that Z0∈𝒵,Q0∈𝒬formulae-sequencesubscript𝑍0𝒵subscript𝑄0𝒬Z\_{0}\in\mathcal{Z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q and Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Given a sample transition (xt,at,rt,xt+1,at+1)subscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡subscript𝑥𝑡1subscript𝑎𝑡1(x\_{t},a\_{t},r\_{t},x\_{t+1},a\_{t+1})( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) consider the mixture update | | | | | --- | --- | --- | | | PZt+1(x,a):={(1−αt)PZt(x,a)+αtPZt′(xt,at)PZt(x,a)if x,a≠xt,atassignsubscript𝑃subscript𝑍𝑡1𝑥𝑎cases1subscript𝛼𝑡subscript𝑃subscript𝑍𝑡𝑥𝑎subscript𝛼𝑡subscript𝑃superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡missing-subexpressionsubscript𝑃subscript𝑍𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡P\_{Z\_{t+1}}(x,a):=\left\{\begin{array}[]{ll}(1-\alpha\_{t})P\_{Z\_{t}}(x,a)+\alpha\_{t}P\_{Z\_{t}^{\prime}}(x\_{t},a\_{t})&\\ P\_{Z\_{t}}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}\end{array}\right.italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) := { start\_ARRAY start\_ROW start\_CELL ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL end\_ROW end\_ARRAY | | and the SARSA update | | | | | --- | --- | --- | | | Qt+1(xt,at):={Qt(xt,at)+αtδtQt(x,a)if x,a≠xt,atassignsubscript𝑄𝑡1subscript𝑥𝑡subscript𝑎𝑡casessubscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡subscript𝛼𝑡subscript𝛿𝑡missing-subexpressionsubscript𝑄𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡Q\_{t+1}(x\_{t},a\_{t}):=\left\{\begin{array}[]{ll}Q\_{t}(x\_{t},a\_{t})+\alpha\_{t}\delta\_{t}&\\ Q\_{t}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}\end{array}\right.italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) := { start\_ARRAY start\_ROW start\_CELL italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL end\_ROW end\_ARRAY | | where δt:=(rt+γQt(xt+1,at+1)−Qt(xt,at))assignsubscript𝛿𝑡subscript𝑟𝑡𝛾subscript𝑄𝑡subscript𝑥𝑡1subscript𝑎𝑡1subscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡\delta\_{t}:=(r\_{t}+\gamma Q\_{t}(x\_{t+1},a\_{t+1})-Q\_{t}(x\_{t},a\_{t}))italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT := ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) - italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ), then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proposition 5. Suppose that Z0∈𝒵z,Q0∈𝒬formulae-sequencesubscript𝑍0subscript𝒵𝑧subscript𝑄0𝒬Z\_{0}\in\mathcal{Z}\_{z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q, Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, that 𝐳𝐳\mathbf{z}bold\_z brackets the set of attainable value distributions, and PZt′subscript𝑃superscriptsubscript𝑍𝑡normal-′P\_{Z\_{t}^{\prime}}italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT in Prop. [4](#Thmprop4a "Proposition 4. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") is replaced by the projected target PΠCZt′subscript𝑃subscriptnormal-Π𝐶superscriptsubscript𝑍𝑡normal-′P\_{\Pi\_{C}Z\_{t}^{\prime}}italic\_P start\_POSTSUBSCRIPT roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT. Then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. Together, the propositions above show that there is no benefit, at least in terms of modelling expectations, to using distributional RL in a tabular setting when considering the distributional analogues of update rules in common usage in reinforcement learning. Next we turn our attention to a slightly more complex case, where distributional updates correspond to a semi-gradient update. In the expected setting, the mixture update of Prop. [4](#Thmprop4a "Proposition 4. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") corresponds to taking the semi-gradient of the squared loss of the temporal difference error δtsubscript𝛿𝑡\delta\_{t}italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. While there is no simple notion of semi-gradient when Z𝑍Zitalic\_Z is allowed to represent arbitrary distributions in 𝒵𝒵\mathcal{Z}caligraphic\_Z, there is when we consider a categorical representation, which is a finite object when 𝒳𝒳\mathcal{X}caligraphic\_X and 𝒜𝒜\mathcal{A}caligraphic\_A are finite (specifically, Z𝑍Zitalic\_Z can be represented by |𝒳||𝒜|K𝒳𝒜𝐾|\mathcal{X}||\mathcal{A}|K| caligraphic\_X | | caligraphic\_A | italic\_K real values). To keep the exposition simple, in what follows we ignore the fact that semi-gradient updates may yield an object which is not a probability distribution proper. In particular, the arguments remain unchanged if we allow the learner to output a signed distribution, as argued by (?). ###### Definition 3 (Gradient of Cramér Distance). Let Z,Z′∈𝒵z𝑍superscript𝑍normal-′ subscript𝒵𝑧Z,Z^{\prime}\in\mathcal{Z}\_{z}italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT be two categorical distributions supported on 𝐳𝐳\mathbf{z}bold\_z. We define the gradient of the squared Cramér distance with respect to the CDF of Z𝑍Zitalic\_Z, denoted ∇Fℓ22(Z,Z′)∈ℝKsubscriptnormal-∇𝐹superscriptsubscriptnormal-ℓ22𝑍superscript𝑍normal-′superscriptℝ𝐾\nabla\_{F}\ell\_{2}^{2}(Z,Z^{\prime})\in\mathbb{R}^{K}∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K end\_POSTSUPERSCRIPT as follows: | | | | | --- | --- | --- | | | ∇Fℓ22(Z,Z′)[i]:=∂∂F(zi)ℓ22(Z,Z′).assignsubscript∇𝐹superscriptsubscriptℓ22𝑍superscript𝑍′delimited-[]𝑖𝐹subscript𝑧𝑖superscriptsubscriptℓ22𝑍superscript𝑍′\nabla\_{F}\ell\_{2}^{2}(Z,Z^{\prime})[i]:=\frac{\partial}{\partial F(z\_{i})}\ell\_{2}^{2}(Z,Z^{\prime}).∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) [ italic\_i ] := divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_F ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) . | | Similarly, | | | | | --- | --- | --- | | | ∇Pℓ22(Z,Z′)[i]:=∂∂P(zi)ℓ22(Z,Z′).assignsubscript∇𝑃superscriptsubscriptℓ22𝑍superscript𝑍′delimited-[]𝑖𝑃subscript𝑧𝑖superscriptsubscriptℓ22𝑍superscript𝑍′\nabla\_{P}\ell\_{2}^{2}(Z,Z^{\prime})[i]:=\frac{\partial}{\partial P(z\_{i})}\ell\_{2}^{2}(Z,Z^{\prime}).∇ start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) [ italic\_i ] := divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_P ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z , italic\_Z start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) . | | We say that the categorical support 𝐳𝐳\mathbf{z}bold\_z is *c𝑐citalic\_c-spaced* if zi+1−zi=csubscript𝑧𝑖1subscript𝑧𝑖𝑐z\_{i+1}-z\_{i}=citalic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = italic\_c for all i𝑖iitalic\_i (recall that zi+1≥zisubscript𝑧𝑖1subscript𝑧𝑖z\_{i+1}\geq z\_{i}italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT ≥ italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT). ###### Proposition 6. Suppose that the categorical support 𝐳𝐳\mathbf{z}bold\_z is c𝑐citalic\_c-spaced. Let Z0∈𝒵,Q0∈𝒬formulae-sequencesubscript𝑍0𝒵subscript𝑄0𝒬Z\_{0}\in\mathcal{Z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q be such that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Suppose that Qt+1subscript𝑄𝑡1Q\_{t+1}italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is updated according to the SARSA update with step-size αtsubscript𝛼𝑡\alpha\_{t}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Let Zt′superscriptsubscript𝑍𝑡normal-′Z\_{t}^{\prime}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT be given by ΠC(rt+γZt(xt+1,at+1))subscriptnormal-Π𝐶subscript𝑟𝑡𝛾subscript𝑍𝑡subscript𝑥𝑡1subscript𝑎𝑡1\Pi\_{C}(r\_{t}+\gamma Z\_{t}(x\_{t+1},a\_{t+1}))roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ). Consider the CDF gradient update rule | | | | | --- | --- | --- | | | FZt+1(x,a):={FZt(x,a)+αt′∇Fℓ22(Zt(xt,at),Zt′(xt,at))FZt(x,a)if x,a≠xt,at.assignsubscript𝐹subscript𝑍𝑡1𝑥𝑎casessubscript𝐹subscript𝑍𝑡𝑥𝑎superscriptsubscript𝛼𝑡′subscript∇𝐹superscriptsubscriptℓ22subscript𝑍𝑡subscript𝑥𝑡subscript𝑎𝑡superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡missing-subexpressionsubscript𝐹subscript𝑍𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡\displaystyle F\_{Z\_{t+1}}(x,a):=\left\{\begin{array}[]{ll}F\_{Z\_{t}}(x,a)+\alpha\_{t}^{\prime}\nabla\_{F}\ell\_{2}^{2}(Z\_{t}(x\_{t},a\_{t}),Z\_{t}^{\prime}(x\_{t},a\_{t}))&\\ F\_{Z\_{t}}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}.\end{array}\right.italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) := { start\_ARRAY start\_ROW start\_CELL italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT . end\_CELL end\_ROW end\_ARRAY | | If αt′=αt2csuperscriptsubscript𝛼𝑡normal-′subscript𝛼𝑡2𝑐\alpha\_{t}^{\prime}=\tfrac{\alpha\_{t}}{2c}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = divide start\_ARG italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG start\_ARG 2 italic\_c end\_ARG, then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. Prop. [6](#Thmprop6a "Proposition 6. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") shows that there exists a semi-gradient update which is expectation-equivalent to the SARSA update, with only a change of step-size required. While this is not too surprising ((?) remarked on the relationship between the mixture update of Prop. [4](#Thmprop4a "Proposition 4. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") and the CDF semi-gradient), the result does highlight that the equivalence continues to hold even in gradient-based settings. The resemblance stops here, however, and we now come to our first negative result. ###### Proposition 7. Suppose the CDF gradient in update rule of Prop. [6](#Thmprop6a "Proposition 6. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") is replaced by the PDF gradient ∇Pℓ22(Zt,Zt′)subscriptnormal-∇𝑃superscriptsubscriptnormal-ℓ22subscript𝑍𝑡superscriptsubscript𝑍𝑡normal-′\nabla\_{P}\ell\_{2}^{2}(Z\_{t},Z\_{t}^{\prime})∇ start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ). Then for each choice of step-size α′superscript𝛼normal-′\alpha^{\prime}italic\_α start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT there exists an MDP M𝑀Mitalic\_M and a time step t∈ℕ𝑡ℕt\in\mathbb{N}italic\_t ∈ blackboard\_N for which Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT but Zt≠𝔼Qtsubscript𝑍𝑡𝔼subscript𝑄𝑡Z\_{t}\overset{\mathbb{E}}{\neq}Q\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG ≠ end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. The counterexample used in the proof of Prop. [7](#Thmprop7a "Proposition 7. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") illustrates what happens when the gradient is taken w.r.t. the probability mass: some atoms of the distribution may be assigned negative probabilities. Including a projection step does not rectify the issue, as the expectation of Ztsubscript𝑍𝑡Z\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT remains different from Qtsubscript𝑄𝑡Q\_{t}italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. ### 4.2 Linear Function Approximation In the linear approximation setting, we represent each state-action pair x,a𝑥𝑎x,aitalic\_x , italic\_a as a feature vector ϕx,a∈ℝdsubscriptitalic-ϕ𝑥𝑎superscriptℝ𝑑\phi\_{x,a}\in\mathbb{R}^{d}italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_d end\_POSTSUPERSCRIPT, We wish to find a linear function given by a weight vector θ𝜃\thetaitalic\_θ such that | | | | | --- | --- | --- | | | θTϕx,a≈Qπ(x,a).superscript𝜃𝑇subscriptitalic-ϕ𝑥𝑎superscript𝑄𝜋𝑥𝑎\theta^{T}\phi\_{x,a}\approx Q^{\pi}(x,a).italic\_θ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ≈ italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) . | | In the categorical distributional setting, θ𝜃\thetaitalic\_θ becomes a matrix W∈ℝK×d𝑊superscriptℝ𝐾𝑑W\in\mathbb{R}^{K\times d}italic\_W ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K × italic\_d end\_POSTSUPERSCRIPT. Here we will consider approximating the cumulative distribution function: | | | | | --- | --- | --- | | | FZ(x,a)(zi)=Wϕx,a[i].subscript𝐹𝑍𝑥𝑎subscript𝑧𝑖𝑊subscriptitalic-ϕ𝑥𝑎delimited-[]𝑖F\_{Z(x,a)}(z\_{i})=W\phi\_{x,a}[i].italic\_F start\_POSTSUBSCRIPT italic\_Z ( italic\_x , italic\_a ) end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT [ italic\_i ] . | | We can extend this parametric form by viewing F𝐹Fitalic\_F as describing the CDF of a mixture of Diracs (Equation [4](#S2.E4 "4 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning")). Thus, F(z)=0𝐹𝑧0F(z)=0italic\_F ( italic\_z ) = 0 for z<z1𝑧subscript𝑧1z<z\_{1}italic\_z < italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, and similarly F(z)=F(zk)𝐹𝑧𝐹subscript𝑧𝑘F(z)=F(z\_{k})italic\_F ( italic\_z ) = italic\_F ( italic\_z start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ) for z≥zk𝑧subscript𝑧𝑘z\geq z\_{k}italic\_z ≥ italic\_z start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT; see (?) for a justification. In what follows we further assume that the support 𝐳𝐳\mathbf{z}bold\_z is 1111-spaced. In this setting, there may be no W𝑊Witalic\_W for which FZ(x,a)subscript𝐹𝑍𝑥𝑎F\_{Z}(x,a)italic\_F start\_POSTSUBSCRIPT italic\_Z end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) describes a proper cumulative distribution function: e.g. F(y)𝐹𝑦F(y)italic\_F ( italic\_y ) may be less than or greater than 1 for y>zk𝑦subscript𝑧𝑘y>z\_{k}italic\_y > italic\_z start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT. Yet, as shown by (?), we can still analyze the behaviour of a distributional algorithm which is allowed to output improper distributions. In our analysis we will assume that all predicted distributions sum to 1, though they may assign negative mass to some outcomes. We write 𝒬ϕ:={θ⊤ϕ:θ∈ℝd}assignsubscript𝒬italic-ϕconditional-setsuperscript𝜃topitalic-ϕ𝜃superscriptℝ𝑑\mathcal{Q}\_{\phi}:=\{\theta^{\top}\phi:\theta\in\mathbb{R}^{d}\}caligraphic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT := { italic\_θ start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT italic\_ϕ : italic\_θ ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_d end\_POSTSUPERSCRIPT } for the set of value functions that can be represented by a linear approximation over ϕitalic-ϕ\phiitalic\_ϕ. Similarly, 𝒵ϕ:={Wϕ:W∈ℝK×d}assignsubscript𝒵italic-ϕconditional-set𝑊italic-ϕ𝑊superscriptℝ𝐾𝑑\mathcal{Z}\_{\phi}:=\{W\phi:W\in\mathbb{R}^{K\times d}\}caligraphic\_Z start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT := { italic\_W italic\_ϕ : italic\_W ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K × italic\_d end\_POSTSUPERSCRIPT } is the set of CDFs that are linearly representable. For Zt∈𝒵ϕsubscript𝑍𝑡subscript𝒵italic-ϕZ\_{t}\in\mathcal{Z}\_{\phi}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT, let Wtsubscript𝑊𝑡W\_{t}italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT be the corresponding weight matrix. As before, we define Zt′(xt,at)superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡Z\_{t}^{\prime}(x\_{t},a\_{t})italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) to be the random variable corresponding to the projected target ΠC[rt+γZt(xt+1,at+1)]subscriptΠ𝐶delimited-[]subscript𝑟𝑡𝛾subscript𝑍𝑡subscript𝑥𝑡1subscript𝑎𝑡1\Pi\_{C}[r\_{t}+\gamma Z\_{t}(x\_{t+1},a\_{t+1})]roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ]. ###### Proposition 8. Let Z0∈𝒵ϕsubscript𝑍0subscript𝒵italic-ϕZ\_{0}\in\mathcal{Z}\_{\phi}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT, Q0∈𝒬ϕsubscript𝑄0subscript𝒬italic-ϕQ\_{0}\in\mathcal{Q}\_{\phi}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT, and 𝐳∈ℝK𝐳superscriptℝ𝐾\mathbf{z}\in\mathbb{R}^{K}bold\_z ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K end\_POSTSUPERSCRIPT such that z is 1-spaced. Suppose that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, and that Z0(x,a)[zK]=1subscript𝑍0𝑥𝑎delimited-[]subscript𝑧𝐾1Z\_{0}(x,a)[z\_{K}]=1italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) [ italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ] = 1 ∀x,afor-all𝑥𝑎\forall x,a∀ italic\_x , italic\_a. Let Wt,θtsubscript𝑊𝑡subscript𝜃𝑡W\_{t},\theta\_{t}italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT respectively denote the weights corresponding to Ztsubscript𝑍𝑡Z\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and Qtsubscript𝑄𝑡Q\_{t}italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. If Zt+1subscript𝑍𝑡1Z\_{t+1}italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is computed from the semi-gradient update rule | | | | | --- | --- | --- | | | Wt+1:=Wt+αt(FZt′(xt,at)−Wtϕxt,at)ϕxt,atTassignsubscript𝑊𝑡1subscript𝑊𝑡subscript𝛼𝑡subscript𝐹superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡subscript𝑊𝑡subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇W\_{t+1}:=W\_{t}+\alpha\_{t}(F\_{Z\_{t}^{\prime}}(x\_{t},a\_{t})-W\_{t}\phi\_{x\_{t},a\_{t}})\phi\_{x\_{t},a\_{t}}^{T}italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) - italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT | | and Qt+1subscript𝑄𝑡1Q\_{t+1}italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is computed according to Equation [2](#S2.E2 "2 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") with the same step-size αtsubscript𝛼𝑡\alpha\_{t}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. Importantly, the gradient in the previous proposition is taken with respect to the CDF of the distribution. Taking the gradient with respect to the Probability Mass Function (PMF) does not preserve the expected value of the predictions, which we have already shown in the tabular setting. This negative result is consistent with the results on signed distributions by (?). ![Refer to caption](/html/1901.11084/assets/cartpole_fourier_all_with_inset.png) ![Refer to caption](/html/1901.11084/assets/acrobot_fourier_all.png) Figure 1: Cartpole (left) and Acrobot (right) with Fourier features of order 4. Step size in parentheses. Algorithms correspond to the lite versions described in text. ![Refer to caption](/html/1901.11084/assets/cartpole_fourier_basis_1.png) ![Refer to caption](/html/1901.11084/assets/cartpole_fourier_basis_2.png) ![Refer to caption](/html/1901.11084/assets/cartpole_fourier_basis_3_with_inset.png) Figure 2: Varying the basis orders on CartPole. Orders 1 (left), 2 (center), 3 (right). Step sizes same as in Fig.[1](#S4.F1 "Figure 1 ‣ 4.2 Linear Function Approximation ‣ 4 Analysis of Behavioural Differences ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"). ### 4.3 Non-linear Function Approximation To conclude this theoretical analysis, we consider more general function approximation schemes, which we will refer to as the non-linear setting. In the non-linear setting, we consider a differentiable function ψ(W,⋅)𝜓𝑊⋅\psi(W,\cdot)italic\_ψ ( italic\_W , ⋅ ). For example, the function ψ(W,ϕx,a)𝜓𝑊subscriptitalic-ϕ𝑥𝑎\psi(W,\phi\_{x,a})italic\_ψ ( italic\_W , italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ) could be a probability mass function given by a softmax distribution over the logits Wϕx,a𝑊subscriptitalic-ϕ𝑥𝑎W\phi\_{x,a}italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT, as is the case for the final layer of the neural network in the C51 algorithm. ###### Proposition 9. There exists a (nonlinear) representation of the cumulative distribution function parametrized by W∈ℝK×d𝑊superscriptℝ𝐾𝑑W\in\mathbb{R}^{K\times d}italic\_W ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K × italic\_d end\_POSTSUPERSCRIPT such that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT but after applying the semi-gradient update rule | | | | | --- | --- | --- | | | Wt+1:=Wt+αt∇Wℓ22(ψ(W,ϕ(xt,at)),FZt′),assignsubscript𝑊𝑡1subscript𝑊𝑡subscript𝛼𝑡subscript∇𝑊superscriptsubscriptℓ22𝜓𝑊italic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝐹superscriptsubscript𝑍𝑡′W\_{t+1}:=W\_{t}+\alpha\_{t}\nabla\_{W}\ell\_{2}^{2}(\psi(W,\phi(x\_{t},a\_{t})),F\_{Z\_{t}^{\prime}}),italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∇ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_ψ ( italic\_W , italic\_ϕ ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) , italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) , | | where FZt′subscript𝐹superscriptsubscript𝑍𝑡normal-′F\_{Z\_{t}^{\prime}}italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT is the cumulative distribution function of the projected Bellman target, we have Z1≠𝔼Q1subscript𝑍1𝔼subscript𝑄1Z\_{1}\overset{\mathbb{E}}{\neq}Q\_{1}italic\_Z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG ≠ end\_ARG italic\_Q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. The key difference with Prop. [8](#Thmprop8a "Proposition 8. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") is the change from the gradient of a linear function (the feature vector ϕxt,atsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡\phi\_{x\_{t},a\_{t}}italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT) to the gradient of a nonlinear function; hence the result is not as trivial as it might look. Still, while the result is not altogether surprising, combined with our results in the linear case it shows that the interplay with nonlinear function approximation is a viable candidate for explaining the benefits of the distributional approach. In the next section we will present empirical evidence to this effect. 5 Empirical Analysis --------------------- Our theoretical results demonstrating that distributional RL often performs identically to expected RL contrast with the empirical results of (?; ?; ?; ?; ?), to name a few. In this section we confirm our theoretical findings by providing empirical evidence that distributional reinforcement learning does not improve performance when combined with tabular representations or linear function approximation. Then, we find evidence of improved performance when combined with deep networks, suggesting that the answer lies, as suggested by Prop. [9](#Thmprop9a "Proposition 9. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"), in distributional reinforcement learning’s interaction with nonlinear function approximation. ### 5.1 Tabular models Though theoretical results indicate that performing gradient updates with respect to the distribution’s CDF should produce different predicted distributions from gradient updates with respect to its PMF, it is not immediately clear how these differences affect performance. To explore this, we considered a 12x12 gridworld environment and ran two distributional versions of Q-learning, one which performed gradient updates with respect to the CDF and one which performed updates with respect to the PMF of the predicted distribution, alongside traditional Q-learning. We found that, as predicted by Proposition 4, when given the same random seed the CDF update method had identical performance to traditional Q-learning. The PMF update method, though not significantly worse in performance, did exhibit different performance, performing better on some random seeds and worse on others than expected Q-learning. We observed a similar phenomenon with a simple 3-state chain MDP. Results for both experiments are omitted for brevity, but are included in the supplemental material. ![Refer to caption](/html/1901.11084/assets/cartpole_deep.png) ![Refer to caption](/html/1901.11084/assets/acrobot_deep.png) Figure 3: Cartpole with Adam optimizer (left) and Acrobot (right) with deep networks. Learning rate parameter in parentheses. ### 5.2 Linear Function Approximation We next investigate whether the improved performance of distributional RL is due to the outputs and loss functions used, and whether this can be observed even with linear function approximation. We make use of three variants of established algorithms for our investigation, modified to use linear function approximators rather than deep networks. We include in this analysis an algorithm that computes a softmax over logits that are a linear function of a state feature vector. Although we did not include this type of approximator in the linear setting of our theoretical analysis, we include it here as it provides an analogue of C51 against which we can compare the other algorithms. 1. 1. DQN-lite, based on (?), predicts Q(x,a)𝑄𝑥𝑎Q(x,a)italic\_Q ( italic\_x , italic\_a ), the loss is the squared difference between the target and the prediction. 2. 2. C51-lite, based on (?), outputs Z(x,a)𝑍𝑥𝑎Z(x,a)italic\_Z ( italic\_x , italic\_a ), a softmax distribution whose logits are linear in ϕx,asubscriptitalic-ϕ 𝑥𝑎\phi\_{x,a}italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT. It minimizes the cross-entropy loss between the target and the prediction. 3. 3. S51-lite, based on (?), outputs Z(x,a)𝑍𝑥𝑎Z(x,a)italic\_Z ( italic\_x , italic\_a ) as a categorical distribution whose probabilities are a linear function of ϕx,asubscriptitalic-ϕ 𝑥𝑎\phi\_{x,a}italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT and minimizes the squared Cramér distance. We further break S51-lite down into two sub-methods. One of these performs updates by taking the gradient of the Cramér distance with respect to the points of the PMF of the prediction, while the other takes the gradient with respect to the CDF. For a fair comparison, all algorithms used a stochastic gradient descent optimizer except where noted. We used the same hyperparameters for all algorithms, except for step sizes, where we chose the step size that gave the best performance for each algorithm. We otherwise use the usual agent infrastructure from DQN, including a replay memory of capacity 50,000 and a target network which is updated after every 10 training steps. We update the agent by sampling batches of 128 transitions from the replay memory. We ran these algorithms on two classic control environments: CartPole and Acrobot. In CartPole, the objective is to keep a pole balanced on a cart by moving the cart left and right. In Acrobot, the objective is to swing a double-linked pendulum above a threshold by applying torque to one of its joints. We encode each original state x𝑥xitalic\_x, (x∈ℝ4𝑥superscriptℝ4x\in\mathbb{R}^{4}italic\_x ∈ blackboard\_R start\_POSTSUPERSCRIPT 4 end\_POSTSUPERSCRIPT for CartPole and x∈ℝ6𝑥superscriptℝ6x\in\mathbb{R}^{6}italic\_x ∈ blackboard\_R start\_POSTSUPERSCRIPT 6 end\_POSTSUPERSCRIPT for Acrobot) as a feature vector ϕ(x)italic-ϕ𝑥\phi(x)italic\_ϕ ( italic\_x ) given by the Fourier basis for some fixed order (?). For completeness, on Cartpole the basis of order 1 yields 15 features, order 2: 80 features, 3: 255, and finally 4: 624 features. We first compare how DQN-, C51-, and S51-lite perform on the two tasks in Figure [1](#S4.F1 "Figure 1 ‣ 4.2 Linear Function Approximation ‣ 4 Analysis of Behavioural Differences ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") with the order 4 basis, which is more than sufficient to well-approximate the value function. We observe that DQN learns more quickly than C51 and S51 with CDF, while S51 with PMF underperforms significantly. On Acrobot, the difference is even more pronounced. This result at first glance seems to contradict the theoretical result we showed indicating that S51-lite should perform similarly to DQN in the linear function approximation case, but we attribute this difference in performance to the fact that the initialization in the S51 algorithm doesn’t enforce the assumption that the predicted distributions sum to 1. We then investigate the effect of reducing the order in all algorithms in Figure [2](#S4.F2 "Figure 2 ‣ 4.2 Linear Function Approximation ‣ 4 Analysis of Behavioural Differences ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"). We observe that the distributional algorithms performs poorly when there are too few features; by contrast, DQN-lite can perform both tasks with an order 2 basis. This indicates that the distributional methods suffered more when there were fewer informative features available than expectation-based methods did in this setting. ### 5.3 Nonlinear function approximation We repeat the above experiment, but now replace the Fourier basis features with a two-layer ReLU neural network that is trained along with the final layer (which remains linear). In the CartPole task we found that DQN often diverged with the gradient descent optimizer, so we used Adam for all the algorithms, and chose the learning rate parameter that gave the best performance for each. Results are displayed in Figure [3](#S5.F3 "Figure 3 ‣ 5.1 Tabular models ‣ 5 Empirical Analysis ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"). We can observe that C51 generally outperforms DQN, although they both eventually converge to the same value. It is interesting to notice that S51 has a harder time achieving the same performance, and comes nowhere near in the harder Acrobot task. This suggests that despite being theoretically unnecessary, the softmax in C51 is working to its advantage. This finding is consistent with the empirical results observed in Atari games by (?). The results of the previous two sets of experiments indicate that the benefits of distributional reinforcement learning are primarily gained from improved learning in the earlier layers of deep neural networks, as well as in the nonlinear softmax used in C51. We believe further investigations in this direction should lead to a deeper understanding of the distributional approach. 6 Discussion and Future Work ----------------------------- In this paper we have provided theoretical and empirical results that give evidence on the benefits (or, some cases, lack thereof) of the distributional approach in reinforcement learning. Together, our results point to function approximation as the key driver in the difference in behaviour between distributional and expected algorithms. To summarize, our findings are: 1. 1. Distributional methods are generally expectation-equivalent when using tabular representations or linear function approximation, but 2. 2. diverge from expected methods when we use non-linear function approximation. 3. 3. Empirically, we provide fresh confirmation that modelling a distribution helps when using non-linear approximation. There are a few notions from distributional reinforcement learning not covered by our study here, including the effect of using Wasserstein projections of the kind implied by quantile regression (?), and the impact of the softmax transfer function used by C51 on learning. In particular the regression setting, (?) show that even for a fixed set of features, optimizing a distributional loss results in better accuracy than minimizing the squared error of predictions. Yet we believe the most important question raised by our work is: what happens in deep neural networks that benefits most from the distributional perspective? Returning to the proposed reasons for the distributional perspective’s success in the introduction, we note that the potentially regularizing effect of modeling a distribution, and a potential role as an ‘auxiliary task’ played by the distribution are both avenues that remain largely unaddressed by this work. 7 Proofs of main results ------------------------- ###### Proposition 1. Let 𝐳∈ℝk𝐳superscriptℝ𝑘\mathbf{z}\in\mathbb{R}^{k}bold\_z ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_k end\_POSTSUPERSCRIPT, and P𝑃Pitalic\_P be a mixture of Dirac distributions (see Eq. [4](#S2.E4 "4 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning")) whose support is contained in the interval [z1,zK]subscript𝑧1subscript𝑧𝐾[z\_{1},z\_{K}][ italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ]. Then the Cramér projection ΠC(P)subscriptnormal-Π𝐶𝑃\Pi\_{C}(P)roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_P ) onto 𝐳𝐳\mathbf{z}bold\_z is such that | | | | | --- | --- | --- | | | 𝔼[ΠC(P)]=𝔼[P]𝔼delimited-[]subscriptΠ𝐶𝑃𝔼delimited-[]𝑃\mathbb{E}[\Pi\_{C}(P)]=\mathbb{E}[P]blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_P ) ] = blackboard\_E [ italic\_P ] | | . ###### Proof. We first prove the statement for a single dirac δysubscript𝛿𝑦\delta\_{y}italic\_δ start\_POSTSUBSCRIPT italic\_y end\_POSTSUBSCRIPT. We let zi,zi+1subscript𝑧𝑖subscript𝑧𝑖1z\_{i},z\_{i+1}italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT be such that zi≤y≤zi+1subscript𝑧𝑖𝑦subscript𝑧𝑖1z\_{i}\leq y\leq z\_{i+1}italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ≤ italic\_y ≤ italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT. | | | | | | --- | --- | --- | --- | | | 𝔼[ΠCδy]𝔼delimited-[]subscriptΠ𝐶subscript𝛿𝑦\displaystyle\mathbb{E}[\Pi\_{C}\delta\_{y}]blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_y end\_POSTSUBSCRIPT ] | =𝔼[zi+1−yzi+1−ziδzi+y−zizi+1−ziδzi+1]absent𝔼delimited-[]subscript𝑧𝑖1𝑦subscript𝑧𝑖1subscript𝑧𝑖subscript𝛿subscript𝑧𝑖𝑦subscript𝑧𝑖subscript𝑧𝑖1subscript𝑧𝑖subscript𝛿subscript𝑧𝑖1\displaystyle=\mathbb{E}[\frac{z\_{i+1}-y}{z\_{i+1}-z\_{i}}\delta\_{z\_{i}}+\frac{y-z\_{i}}{z\_{i+1}-z\_{i}}\delta\_{z\_{i+1}}]= blackboard\_E [ divide start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_y end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT + divide start\_ARG italic\_y - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] | | | | | =zi+1−yzi+1−zi𝔼[δzi]+y−zizi+1−zi𝔼[δzi+1]absentsubscript𝑧𝑖1𝑦subscript𝑧𝑖1subscript𝑧𝑖𝔼delimited-[]subscript𝛿subscript𝑧𝑖𝑦subscript𝑧𝑖subscript𝑧𝑖1subscript𝑧𝑖𝔼delimited-[]subscript𝛿subscript𝑧𝑖1\displaystyle=\frac{z\_{i+1}-y}{z\_{i+1}-z\_{i}}\mathbb{E}[\delta\_{z\_{i}}]+\frac{y-z\_{i}}{z\_{i+1}-z\_{i}}\mathbb{E}[\delta\_{z\_{i+1}}]= divide start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_y end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG blackboard\_E [ italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] + divide start\_ARG italic\_y - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG blackboard\_E [ italic\_δ start\_POSTSUBSCRIPT italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] | | | | | =1zi+1−zi(zi+1−zi)yabsent1subscript𝑧𝑖1subscript𝑧𝑖subscript𝑧𝑖1subscript𝑧𝑖𝑦\displaystyle=\frac{1}{z\_{i+1}-z\_{i}}(z\_{i+1}-z\_{i})y= divide start\_ARG 1 end\_ARG start\_ARG italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG ( italic\_z start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT - italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_y | | | | | =yabsent𝑦\displaystyle=y= italic\_y | | If the law of the random variable Z𝑍Zitalic\_Z is a mixture of diracs, i.e. P=∑i=1nαiδyi𝑃superscriptsubscript𝑖1𝑛subscript𝛼𝑖subscript𝛿subscript𝑦𝑖P=\sum\_{i=1}^{n}\alpha\_{i}\delta\_{y\_{i}}italic\_P = ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT, then we have | | | | | | --- | --- | --- | --- | | | 𝔼[ΠCZ]𝔼delimited-[]subscriptΠ𝐶𝑍\displaystyle\mathbb{E}[\Pi\_{C}Z]blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z ] | =𝔼[ΠC∑i=1nαiδyi]absent𝔼delimited-[]subscriptΠ𝐶superscriptsubscript𝑖1𝑛subscript𝛼𝑖subscript𝛿subscript𝑦𝑖\displaystyle=\mathbb{E}[\Pi\_{C}\sum\_{i=1}^{n}\alpha\_{i}\delta\_{y\_{i}}]= blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] | | | | | =∑i=1nαi𝔼[ΠCδyi]absentsuperscriptsubscript𝑖1𝑛subscript𝛼𝑖𝔼delimited-[]subscriptΠ𝐶subscript𝛿subscript𝑦𝑖\displaystyle=\sum\_{i=1}^{n}\alpha\_{i}\mathbb{E}[\Pi\_{C}\delta\_{y\_{i}}]= ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT blackboard\_E [ roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] | | | | | =∑i=1nαi𝔼[δyi]absentsuperscriptsubscript𝑖1𝑛subscript𝛼𝑖𝔼delimited-[]subscript𝛿subscript𝑦𝑖\displaystyle=\sum\_{i=1}^{n}\alpha\_{i}\mathbb{E}[\delta\_{y\_{i}}]= ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT blackboard\_E [ italic\_δ start\_POSTSUBSCRIPT italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ] | | | | | =𝔼[Z]absent𝔼delimited-[]𝑍\displaystyle=\mathbb{E}[Z]= blackboard\_E [ italic\_Z ] | | ∎ ###### Proposition 2. Let Z0∈𝒵subscript𝑍0𝒵Z\_{0}\in\mathcal{Z}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z and Q0∈𝒬subscript𝑄0𝒬Q\_{0}\in\mathcal{Q}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q, and suppose that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. If | | | | | --- | --- | --- | | | Zt+1:=T𝒟πZtQt+1:=TπQt,formulae-sequenceassignsubscript𝑍𝑡1subscriptsuperscript𝑇𝜋𝒟subscript𝑍𝑡assignsubscript𝑄𝑡1superscript𝑇𝜋subscript𝑄𝑡Z\_{t+1}:=T^{\pi}\_{\mathcal{D}}Z\_{t}\quad Q\_{t+1}:=T^{\pi}Q\_{t},italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , | | then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. By induction. By construction this is the case for Z0,Q0subscript𝑍0subscript𝑄0Z\_{0},Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Suppose it holds for timestep t𝑡titalic\_t. Then for timestep t+1𝑡1t+1italic\_t + 1, we have: | | | | | | --- | --- | --- | --- | | | 𝔼𝔼\displaystyle\mathbb{E}blackboard\_E | [Zt+1(x,a)]=𝔼[R(x,a)+Zt(X′,A′)]delimited-[]subscript𝑍𝑡1𝑥𝑎𝔼delimited-[]𝑅𝑥𝑎subscript𝑍𝑡superscript𝑋′superscript𝐴′\displaystyle[Z\_{t+1}(x,a)]=\mathbb{E}[R(x,a)+Z\_{t}(X^{\prime},A^{\prime})][ italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] = blackboard\_E [ italic\_R ( italic\_x , italic\_a ) + italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_X start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] | | | | | =𝔼[R(x,a)]+γ∑x′,a′P(x′|x,a)π(a′|x′)𝔼[Zt(x′,a′)]absent𝔼delimited-[]𝑅𝑥𝑎𝛾subscriptsuperscript𝑥′superscript𝑎′𝑃conditionalsuperscript𝑥′𝑥𝑎𝜋conditionalsuperscript𝑎′superscript𝑥′𝔼delimited-[]subscript𝑍𝑡superscript𝑥′superscript𝑎′\displaystyle=\mathbb{E}[R(x,a)]+\gamma\sum\_{x^{\prime},a^{\prime}}P(x^{\prime}|x,a)\pi(a^{\prime}|x^{\prime})\mathbb{E}[Z\_{t}(x^{\prime},a^{\prime})]= blackboard\_E [ italic\_R ( italic\_x , italic\_a ) ] + italic\_γ ∑ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_P ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x , italic\_a ) italic\_π ( italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] | | | | | =𝔼[R(x,a)]+γ∑x′,a′P(x′|x,a)π(a′|x′)Qt(x′,a′)absent𝔼delimited-[]𝑅𝑥𝑎𝛾subscriptsuperscript𝑥′superscript𝑎′𝑃conditionalsuperscript𝑥′𝑥𝑎𝜋conditionalsuperscript𝑎′superscript𝑥′subscript𝑄𝑡superscript𝑥′superscript𝑎′\displaystyle=\mathbb{E}[R(x,a)]+\gamma\sum\_{x^{\prime},a^{\prime}}P(x^{\prime}|x,a)\pi(a^{\prime}|x^{\prime})Q\_{t}(x^{\prime},a^{\prime})= blackboard\_E [ italic\_R ( italic\_x , italic\_a ) ] + italic\_γ ∑ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_P ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x , italic\_a ) italic\_π ( italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT | italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) | | | | | =Qt+1(x,a)∎absentsubscript𝑄𝑡1𝑥𝑎\displaystyle=Q\_{t+1}(x,a)\qed= italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) italic\_∎ | | ###### Proposition 3. Suppose that the finite support brackets the set of attainable value distributions, in the sense that z1≤−Rmax1−γsubscript𝑧1subscript𝑅max1𝛾z\_{1}\leq-\frac{R\_{\textsc{max}}}{1-\gamma}italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ≤ - divide start\_ARG italic\_R start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT end\_ARG start\_ARG 1 - italic\_γ end\_ARG and zK≥Rmax1−γsubscript𝑧𝐾subscript𝑅max1𝛾z\_{K}\geq\frac{R\_{\textsc{max}}}{1-\gamma}italic\_z start\_POSTSUBSCRIPT italic\_K end\_POSTSUBSCRIPT ≥ divide start\_ARG italic\_R start\_POSTSUBSCRIPT max end\_POSTSUBSCRIPT end\_ARG start\_ARG 1 - italic\_γ end\_ARG. Define the projected distributional operator | | | | | --- | --- | --- | | | TCπ:=ΠCT𝒟π.assignsuperscriptsubscript𝑇𝐶𝜋subscriptΠ𝐶subscriptsuperscript𝑇𝜋𝒟T\_{C}^{\pi}:=\Pi\_{C}T^{\pi}\_{\mathcal{D}}.italic\_T start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT := roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT caligraphic\_D end\_POSTSUBSCRIPT . | | Suppose Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, for Z0∈𝒵z,Q0∈𝒬formulae-sequencesubscript𝑍0subscript𝒵𝑧subscript𝑄0𝒬Z\_{0}\in\mathcal{Z}\_{z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q. If | | | | | --- | --- | --- | | | Zt+1:=TCπZtQt+1:=TπQt,formulae-sequenceassignsubscript𝑍𝑡1subscriptsuperscript𝑇𝜋𝐶subscript𝑍𝑡assignsubscript𝑄𝑡1superscript𝑇𝜋subscript𝑄𝑡Z\_{t+1}:=T^{\pi}\_{C}Z\_{t}\quad Q\_{t+1}:=T^{\pi}Q\_{t},italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , | | then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. Again, we proceed by induction and observe that the equality is true by assumption for t=0𝑡0t=0italic\_t = 0. We use the result from Proposition [1](#Thmprop1a "Proposition 1. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning"). Then since 𝔼[TπZt(x,a)]=TπQt(x,a)𝔼delimited-[]superscript𝑇𝜋subscript𝑍𝑡𝑥𝑎superscript𝑇𝜋subscript𝑄𝑡𝑥𝑎\mathbb{E}[T^{\pi}Z\_{t}(x,a)]=T^{\pi}Q\_{t}(x,a)blackboard\_E [ italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] = italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) we have that | | | | | | --- | --- | --- | --- | | | 𝔼[TCπZt(x,a)]𝔼delimited-[]subscriptsuperscript𝑇𝜋𝐶subscript𝑍𝑡𝑥𝑎\displaystyle\mathbb{E}[T^{\pi}\_{C}Z\_{t}(x,a)]blackboard\_E [ italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] | =𝔼[TπZt(x,a)]absent𝔼delimited-[]superscript𝑇𝜋subscript𝑍𝑡𝑥𝑎\displaystyle=\mathbb{E}[T^{\pi}Z\_{t}(x,a)]= blackboard\_E [ italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] | | | | | =TπQt(x,a)absentsuperscript𝑇𝜋subscript𝑄𝑡𝑥𝑎\displaystyle=T^{\pi}Q\_{t}(x,a)= italic\_T start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) | | which proves the proposition. ∎ ###### Corollary 1. The same proof can be used to show that the optimality operator T\*superscript𝑇T^{\*}italic\_T start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT induces equivalent behaviour in distributional and expected algorithms. ###### Proposition 4. Suppose that Z0∈𝒵,Q0∈𝒬formulae-sequencesubscript𝑍0𝒵subscript𝑄0𝒬Z\_{0}\in\mathcal{Z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q and Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Given a sample transition (xt,at,rt,xt+1,at+1)subscript𝑥𝑡subscript𝑎𝑡subscript𝑟𝑡subscript𝑥𝑡1subscript𝑎𝑡1(x\_{t},a\_{t},r\_{t},x\_{t+1},a\_{t+1})( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) consider the mixture update on the law of Ztsubscript𝑍𝑡Z\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, denoted PZtsubscript𝑃subscript𝑍𝑡P\_{Z\_{t}}italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT | | | | | --- | --- | --- | | | PZt+1(x,a):={(1−αt)PZt(x,a)+αtPZt′(xt,at)PZt(x,a)if x,a≠xt,atassignsubscript𝑃subscript𝑍𝑡1𝑥𝑎cases1subscript𝛼𝑡subscript𝑃subscript𝑍𝑡𝑥𝑎subscript𝛼𝑡subscript𝑃superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡missing-subexpressionsubscript𝑃subscript𝑍𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡P\_{Z\_{t+1}}(x,a):=\left\{\begin{array}[]{ll}(1-\alpha\_{t})P\_{Z\_{t}}(x,a)+\alpha\_{t}P\_{Z\_{t}^{\prime}}(x\_{t},a\_{t})&\\ P\_{Z\_{t}}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}\end{array}\right.italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) := { start\_ARRAY start\_ROW start\_CELL ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL end\_ROW end\_ARRAY | | and the SARSA update | | | | | --- | --- | --- | | | Qt+1(xt,at):={Qt(xt,at)+αtδtQt(x,a)if x,a≠xt,atassignsubscript𝑄𝑡1subscript𝑥𝑡subscript𝑎𝑡casessubscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡subscript𝛼𝑡subscript𝛿𝑡missing-subexpressionsubscript𝑄𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡Q\_{t+1}(x\_{t},a\_{t}):=\left\{\begin{array}[]{ll}Q\_{t}(x\_{t},a\_{t})+\alpha\_{t}\delta\_{t}&\\ Q\_{t}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}\end{array}\right.italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) := { start\_ARRAY start\_ROW start\_CELL italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_CELL end\_ROW end\_ARRAY | | where δt:=(rt+γQt(xt+1,at+1)−Qt(xt,at))assignsubscript𝛿𝑡subscript𝑟𝑡𝛾subscript𝑄𝑡subscript𝑥𝑡1subscript𝑎𝑡1subscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡\delta\_{t}:=(r\_{t}+\gamma Q\_{t}(x\_{t+1},a\_{t+1})-Q\_{t}(x\_{t},a\_{t}))italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT := ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) - italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ), then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. We proceed again by induction. We let Zt(x,a)subscript𝑍𝑡𝑥𝑎Z\_{t}(x,a)italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) be the return distribution at time t𝑡titalic\_t. By assumption, 𝔼[Z0(x,a)]=Q0(x,a)𝔼delimited-[]subscript𝑍0𝑥𝑎subscript𝑄0𝑥𝑎\mathbb{E}[Z\_{0}(x,a)]=Q\_{0}(x,a)blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ] = italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) for all x,a𝑥𝑎x,aitalic\_x , italic\_a. We suppose that each target and predicted distribution has finite support, and that the union of the supports for Ztsubscript𝑍𝑡Z\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and Zt′superscriptsubscript𝑍𝑡′Z\_{t}^{\prime}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT has size ktsubscript𝑘𝑡k\_{t}italic\_k start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Now, for the induction step: | | | | | | --- | --- | --- | --- | | | 𝔼𝔼\displaystyle\mathbb{E}blackboard\_E | (Zt+1(xt,at))=∑i=1ktPZt+1(zi)zisubscript𝑍𝑡1subscript𝑥𝑡subscript𝑎𝑡superscriptsubscript𝑖1subscript𝑘𝑡subscript𝑃subscript𝑍𝑡1subscript𝑧𝑖subscript𝑧𝑖\displaystyle(Z\_{t+1}(x\_{t},a\_{t}))=\sum\_{i=1}^{k\_{t}}P\_{Z\_{t+1}}(z\_{i})z\_{i}( italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) = ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | | | | | =∑i=1kt(1−αt)PZt(zi)zi+αtPZt′(zi)ziabsentsuperscriptsubscript𝑖1subscript𝑘𝑡1subscript𝛼𝑡subscript𝑃subscript𝑍𝑡subscript𝑧𝑖subscript𝑧𝑖subscript𝛼𝑡subscript𝑃superscriptsubscript𝑍𝑡′subscript𝑧𝑖subscript𝑧𝑖\displaystyle=\sum\_{i=1}^{k\_{t}}(1-\alpha\_{t})P\_{Z\_{t}}(z\_{i})z\_{i}+\alpha\_{t}P\_{Z\_{t}^{\prime}}(z\_{i})z\_{i}= ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | | | | | =(1−αt)∑i=1ktPZt(zi)zi+αt∑i=1ktPZt′(zi)ziabsent1subscript𝛼𝑡superscriptsubscript𝑖1subscript𝑘𝑡subscript𝑃subscript𝑍𝑡subscript𝑧𝑖subscript𝑧𝑖subscript𝛼𝑡superscriptsubscript𝑖1subscript𝑘𝑡subscript𝑃superscriptsubscript𝑍𝑡′subscript𝑧𝑖subscript𝑧𝑖\displaystyle=(1-\alpha\_{t})\sum\_{i=1}^{k\_{t}}P\_{Z\_{t}}(z\_{i})z\_{i}+\alpha\_{t}\sum\_{i=1}^{k\_{t}}P\_{Z\_{t}^{\prime}}(z\_{i})z\_{i}= ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_k start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | | | | | =(1−αt)𝔼[Zt(xt,at)]+αt𝔼[rt+1+γZt(xt+1,at+1)]absent1subscript𝛼𝑡𝔼delimited-[]subscript𝑍𝑡subscript𝑥𝑡subscript𝑎𝑡subscript𝛼𝑡𝔼delimited-[]subscript𝑟𝑡1𝛾subscript𝑍𝑡subscript𝑥𝑡1subscript𝑎𝑡1\displaystyle=(1-\alpha\_{t})\mathbb{E}[Z\_{t}(x\_{t},a\_{t})]+\alpha\_{t}\mathbb{E}[r\_{t+1}+\gamma Z\_{t}(x\_{t+1},a\_{t+1})]= ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT blackboard\_E [ italic\_r start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT + italic\_γ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ] | | | | | =(1−αt)Qt(xt,at)+αt[rt+γQt(xt+1,at+1)]absent1subscript𝛼𝑡subscript𝑄𝑡subscript𝑥𝑡subscript𝑎𝑡subscript𝛼𝑡delimited-[]subscript𝑟𝑡𝛾subscript𝑄𝑡subscript𝑥𝑡1subscript𝑎𝑡1\displaystyle=(1-\alpha\_{t})Q\_{t}(x\_{t},a\_{t})+\alpha\_{t}[r\_{t}+\gamma Q\_{t}(x\_{t+1},a\_{t+1})]= ( 1 - italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT [ italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ] | | | | | =Qt+1(xt,at)∎absentsubscript𝑄𝑡1subscript𝑥𝑡subscript𝑎𝑡\displaystyle=Q\_{t+1}(x\_{t},a\_{t})\qed= italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_∎ | | ###### Proposition 5. Suppose that Z0∈𝒵z,Q0∈𝒬formulae-sequencesubscript𝑍0subscript𝒵𝑧subscript𝑄0𝒬Z\_{0}\in\mathcal{Z}\_{z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_z end\_POSTSUBSCRIPT , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q, Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, that 𝐳𝐳\mathbf{z}bold\_z brackets the set of attainable value distributions, and PZt′subscript𝑃superscriptsubscript𝑍𝑡normal-′P\_{Z\_{t}^{\prime}}italic\_P start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT in Prop. [4](#Thmprop4a "Proposition 4. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") is replaced by the projected target PΠCZt′subscript𝑃subscriptnormal-Π𝐶superscriptsubscript𝑍𝑡normal-′P\_{\Pi\_{C}Z\_{t}^{\prime}}italic\_P start\_POSTSUBSCRIPT roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT. Then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. Follows from propositions 1 and 4. ∎ ###### Proposition 6. Suppose that the categorical support 𝐳𝐳\mathbf{z}bold\_z is c𝑐citalic\_c-spaced. Let Z0∈𝒵,Q0∈𝒬formulae-sequencesubscript𝑍0𝒵subscript𝑄0𝒬Z\_{0}\in\mathcal{Z},Q\_{0}\in\mathcal{Q}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z , italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q be such that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Suppose that Qt+1subscript𝑄𝑡1Q\_{t+1}italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is updated according to the SARSA update with step-size αtsubscript𝛼𝑡\alpha\_{t}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. Let Zt′superscriptsubscript𝑍𝑡normal-′Z\_{t}^{\prime}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT be given by ΠC(rt+γZt(xt+1,at+1))subscriptnormal-Π𝐶subscript𝑟𝑡𝛾subscript𝑍𝑡subscript𝑥𝑡1subscript𝑎𝑡1\Pi\_{C}(r\_{t}+\gamma Z\_{t}(x\_{t+1},a\_{t+1}))roman\_Π start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ( italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_γ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ) ). Consider the CDF gradient update rule | | | | | --- | --- | --- | | | FZt+1(x,a):={FZt(x,a)+αt′∇Fℓ22(Zt(xt,at),Zt′(xt,at))FZt(x,a)if x,a≠xt,at.assignsubscript𝐹subscript𝑍𝑡1𝑥𝑎casessubscript𝐹subscript𝑍𝑡𝑥𝑎superscriptsubscript𝛼𝑡′subscript∇𝐹superscriptsubscriptℓ22subscript𝑍𝑡subscript𝑥𝑡subscript𝑎𝑡superscriptsubscript𝑍𝑡′subscript𝑥𝑡subscript𝑎𝑡missing-subexpressionsubscript𝐹subscript𝑍𝑡𝑥𝑎formulae-sequenceif 𝑥𝑎 subscript𝑥𝑡subscript𝑎𝑡\displaystyle F\_{Z\_{t+1}}(x,a):=\left\{\begin{array}[]{ll}F\_{Z\_{t}}(x,a)+\alpha\_{t}^{\prime}\nabla\_{F}\ell\_{2}^{2}(Z\_{t}(x\_{t},a\_{t}),Z\_{t}^{\prime}(x\_{t},a\_{t}))&\\ F\_{Z\_{t}}(x,a)&\text{if }x,a\neq x\_{t},a\_{t}.\end{array}\right.italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) := { start\_ARRAY start\_ROW start\_CELL italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) + italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) , italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) end\_CELL start\_CELL if italic\_x , italic\_a ≠ italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT . end\_CELL end\_ROW end\_ARRAY | | If αt′=αt2csuperscriptsubscript𝛼𝑡normal-′subscript𝛼𝑡2𝑐\alpha\_{t}^{\prime}=\tfrac{\alpha\_{t}}{2c}italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = divide start\_ARG italic\_α start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG start\_ARG 2 italic\_c end\_ARG, then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. We first note that ∇Fℓ22(F,F′)=2c(F′−F)subscript∇𝐹superscriptsubscriptℓ22𝐹superscript𝐹′2𝑐superscript𝐹′𝐹\nabla\_{F}\ell\_{2}^{2}(F,F^{\prime})=2c(F^{\prime}-F)∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_F , italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = 2 italic\_c ( italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT - italic\_F ). | | | | | | --- | --- | --- | --- | | | ∇Fℓ22(F,F′)[i]subscript∇𝐹superscriptsubscriptℓ22𝐹superscript𝐹′delimited-[]𝑖\displaystyle\nabla\_{F}\ell\_{2}^{2}(F,F^{\prime})[i]∇ start\_POSTSUBSCRIPT italic\_F end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_F , italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) [ italic\_i ] | =∂∂Fi∑j=1K−1c(F′(zj)−F(zj))2absentsubscript𝐹𝑖superscriptsubscript𝑗1𝐾1𝑐superscriptsuperscript𝐹′subscript𝑧𝑗𝐹subscript𝑧𝑗2\displaystyle=\frac{\partial}{\partial F\_{i}}\sum\_{j=1}^{K-1}c(F^{\prime}(z\_{j})-F(z\_{j}))^{2}= divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_F start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_K - 1 end\_POSTSUPERSCRIPT italic\_c ( italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) - italic\_F ( italic\_z start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT | | | | | =∂∂Fic(F′(zi)−F(zi))2absentsubscript𝐹𝑖𝑐superscriptsuperscript𝐹′subscript𝑧𝑖𝐹subscript𝑧𝑖2\displaystyle=\frac{\partial}{\partial F\_{i}}c(F^{\prime}(z\_{i})-F(z\_{i}))^{2}= divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_F start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG italic\_c ( italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - italic\_F ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT | | | | | =2c(F′(zi)−F(zi))absent2𝑐superscript𝐹′subscript𝑧𝑖𝐹subscript𝑧𝑖\displaystyle=2c(F^{\prime}(z\_{i})-F(z\_{i}))= 2 italic\_c ( italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - italic\_F ( italic\_z start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) | | Thus the gradient update in this case is simply a mixture update with a different step size, and the result follows from Proposition 3. ∎ ###### Proposition 7. Suppose the CDF gradient in update rule of Prop. [6](#Thmprop6a "Proposition 6. ‣ 7 Proofs of main results ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") is replaced by the PDF gradient ∇P(Zt,Zt′)subscriptnormal-∇𝑃subscript𝑍𝑡superscriptsubscript𝑍𝑡normal-′\nabla\_{P}(Z\_{t},Z\_{t}^{\prime})∇ start\_POSTSUBSCRIPT italic\_P end\_POSTSUBSCRIPT ( italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ). Then for each choice of step-size α′superscript𝛼normal-′\alpha^{\prime}italic\_α start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT there exists an MDP M𝑀Mitalic\_M and a time step t∈ℕ𝑡ℕt\in\mathbb{N}italic\_t ∈ blackboard\_N for which Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT but Zt≠𝔼Qtsubscript𝑍𝑡𝔼subscript𝑄𝑡Z\_{t}\overset{\mathbb{E}}{\neq}Q\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG ≠ end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. ###### Proof. Suppose we have a support 𝐳=(0,1,2)𝐳012\textbf{z}=(0,1,2)z = ( 0 , 1 , 2 ) and two CDFs: | | | | | --- | --- | --- | | | F′=(12,12,1)superscript𝐹′12121F^{\prime}=(\frac{1}{2},\frac{1}{2},1)italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = ( divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG , divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG , 1 ) | | and | | | | | --- | --- | --- | | | F=(13,23,1).𝐹13231F=(\frac{1}{3},\frac{2}{3},1).italic\_F = ( divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG , divide start\_ARG 2 end\_ARG start\_ARG 3 end\_ARG , 1 ) . | | We note that the expected values of F𝐹Fitalic\_F and F′superscript𝐹′F^{\prime}italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT are both 1. Taking the gradient of the squared Cramér distance between the two distributions with respect to the PMF of the first gives ∇pℓ22(F,F′)=(0,−13,0)subscript∇𝑝superscriptsubscriptℓ22𝐹superscript𝐹′0130\nabla\_{p}\ell\_{2}^{2}(F,F^{\prime})=(0,-\frac{1}{3},0)∇ start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_F , italic\_F start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = ( 0 , - divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG , 0 ). Now, when we consider | | | | | --- | --- | --- | | | P+α∇ℓ22(P,P′)≈(13,13−α3,13)𝑃𝛼∇superscriptsubscriptℓ22𝑃superscript𝑃′1313𝛼313P+\alpha\nabla\ell\_{2}^{2}(P,P^{\prime})\approx(\frac{1}{3},\frac{1}{3}-\frac{\alpha}{3},\frac{1}{3})italic\_P + italic\_α ∇ roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_P , italic\_P start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ≈ ( divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG , divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG - divide start\_ARG italic\_α end\_ARG start\_ARG 3 end\_ARG , divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG ) | | we can immediately observe that this is not a probability distribution as it has expected value 1−α31𝛼31-\frac{\alpha}{3}1 - divide start\_ARG italic\_α end\_ARG start\_ARG 3 end\_ARG. The expectations of P𝑃Pitalic\_P and P′superscript𝑃′P^{\prime}italic\_P start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT are both 1, so a Cramér distance update w.r.t. the CDFs would give a new distribution with expectation 1, as would an update which only looked at their expectations. ∎ ###### Proposition 8. Let Z0∈𝒵ϕsubscript𝑍0subscript𝒵italic-ϕZ\_{0}\in\mathcal{Z}\_{\phi}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Z start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT and Q0∈𝒬ϕsubscript𝑄0subscript𝒬italic-ϕQ\_{0}\in\mathcal{Q}\_{\phi}italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ caligraphic\_Q start\_POSTSUBSCRIPT italic\_ϕ end\_POSTSUBSCRIPT, and suppose that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. Let Wt,θtsubscript𝑊𝑡subscript𝜃𝑡W\_{t},\theta\_{t}italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_θ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT respectively denote the weights corresponding to Ztsubscript𝑍𝑡Z\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and Qtsubscript𝑄𝑡Q\_{t}italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. If Zt+1subscript𝑍𝑡1Z\_{t+1}italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is computed from the semi-gradient update rule | | | | | --- | --- | --- | | | Wt+1:=Wt+α(FZt′−Wtϕxt,at)ϕxt,atTassignsubscript𝑊𝑡1subscript𝑊𝑡𝛼subscript𝐹superscriptsubscript𝑍𝑡′subscript𝑊𝑡subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇W\_{t+1}:=W\_{t}+\alpha(F\_{Z\_{t}^{\prime}}-W\_{t}\phi\_{x\_{t},a\_{t}})\phi\_{x\_{t},a\_{t}}^{T}italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_α ( italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT - italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT | | and Qt+1subscript𝑄𝑡1Q\_{t+1}italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT is computed according to Equation [2](#S2.E2 "2 ‣ 2 Background ‣ A Comparative Analysis of Expected and Distributional Reinforcement Learning") with the same step-size α𝛼\alphaitalic\_α, then also Zt=𝔼Qt∀t∈ℕsubscript𝑍𝑡𝔼subscript𝑄𝑡for-all𝑡ℕZ\_{t}\overset{\mathbb{E}}{=}Q\_{t}\;\forall\;t\in\mathbb{N}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∀ italic\_t ∈ blackboard\_N. ###### Proof. We first note that we can compute the expected value of the distribution Wϕxt,at𝑊subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡W\phi\_{x\_{t},a\_{t}}italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT directly by using the linear map zTC−1superscript𝑧𝑇superscript𝐶1z^{T}C^{-1}italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT, where C𝐶Citalic\_C is a lower-triangular all-ones matrix (see (?) for details). So | | | | | --- | --- | --- | | | 𝔼[Zt(xt,at)]=zTC−1Wϕxt,at.𝔼delimited-[]subscript𝑍𝑡subscript𝑥𝑡subscript𝑎𝑡superscript𝑧𝑇superscript𝐶1𝑊subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡\mathbb{E}[Z\_{t}(x\_{t},a\_{t})]=z^{T}C^{-1}W\phi\_{x\_{t},a\_{t}}.blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] = italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT . | | We let Ftsubscript𝐹𝑡F\_{t}italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT and vtsubscript𝑣𝑡v\_{t}italic\_v start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT denote the distributional and expected TD targets respectively. Now, we observe that for any state-action pair (x′,a′)superscript𝑥′superscript𝑎′(x^{\prime},a^{\prime})( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ): | | | | | | --- | --- | --- | --- | | | 𝔼[Zt+1(x′,a′)]𝔼delimited-[]subscript𝑍𝑡1superscript𝑥′superscript𝑎′\displaystyle\mathbb{E}[Z\_{t+1}(x^{\prime},a^{\prime})]blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ] | =zTC−1Wt+1ϕx′,a′absentsuperscript𝑧𝑇superscript𝐶1subscript𝑊𝑡1subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle=z^{T}C^{-1}W\_{t+1}\phi\_{x^{\prime},a^{\prime}}= italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT | | | | | =zTC−1(Wtϕx′,a′)absentsuperscript𝑧𝑇superscript𝐶1subscript𝑊𝑡subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle=z^{T}C^{-1}(W\_{t}\phi\_{x^{\prime},a^{\prime}})= italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT ( italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) | | | | | +αzTC−1(Ft−Wtϕxt,at)ϕxt,atTϕx′,a′𝛼superscript𝑧𝑇superscript𝐶1subscript𝐹𝑡subscript𝑊𝑡subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle+\alpha z^{T}C^{-1}(F\_{t}-W\_{t}\phi\_{x\_{t},a\_{t}})\phi\_{x\_{t},a\_{t}}^{T}\phi\_{x^{\prime},a^{\prime}}+ italic\_α italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT | | We note that by our assumption that all predicted distributions sum to 1, the expected value of the target signed measure Ftsubscript𝐹𝑡F\_{t}italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT given by zTC−1Ftsuperscript𝑧𝑇superscript𝐶1subscript𝐹𝑡z^{T}C^{-1}F\_{t}italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT is equal to the target value function vtsubscript𝑣𝑡v\_{t}italic\_v start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT. So to prove equality of expectation, it suffices to show | | | | | --- | --- | --- | | | αzTC−1(Ft−Wtϕxt,at)ϕxt,atTϕx′,a′=𝛼superscript𝑧𝑇superscript𝐶1subscript𝐹𝑡subscript𝑊𝑡subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′absent\displaystyle\alpha z^{T}C^{-1}(F\_{t}-W\_{t}\phi\_{x\_{t},a\_{t}})\phi\_{x\_{t},a\_{t}}^{T}\phi\_{x^{\prime},a^{\prime}}=italic\_α italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT ( italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT - italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT = | | | | α(θTϕxt,at−vt)ϕxt,atTϕx′,a′.𝛼superscript𝜃𝑇subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝑣𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle\alpha(\theta^{T}\phi\_{x\_{t},a\_{t}}-v\_{t})\phi\_{x\_{t},a\_{t}}^{T}\phi\_{x^{\prime},a^{\prime}}.italic\_α ( italic\_θ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - italic\_v start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT . | | We proceed as follows. | | | | | | --- | --- | --- | --- | | | | zTC−1((Wϕxt,at−Ft)ϕxt,atT)ϕx′,a′superscript𝑧𝑇superscript𝐶1𝑊subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝐹𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle z^{T}C^{-1}((W\phi\_{x\_{t},a\_{t}}-F\_{t})\phi\_{x\_{t},a\_{t}}^{T})\phi\_{x^{\prime},a^{\prime}}italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT ( ( italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT | | | | | =(zTC−1Wϕxt,at−zTC−1Ft)ϕxt,atTϕx′,a′absentsuperscript𝑧𝑇superscript𝐶1𝑊subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscript𝑧𝑇superscript𝐶1subscript𝐹𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle=(z^{T}C^{-1}W\phi\_{x\_{t},a\_{t}}-z^{T}C^{-1}F\_{t})\phi\_{x\_{t},a\_{t}}^{T}\phi\_{x^{\prime},a^{\prime}}= ( italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT | | | By assumption Zt=𝔼Qtsubscript𝑍𝑡𝔼subscript𝑄𝑡Z\_{t}\overset{\mathbb{E}}{=}Q\_{t}italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, and so zTC−1Wϕxt,at=θTϕxt,atsuperscript𝑧𝑇superscript𝐶1𝑊subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡superscript𝜃𝑇subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡z^{T}C^{-1}W\phi\_{x\_{t},a\_{t}}=\theta^{T}\phi\_{x\_{t},a\_{t}}italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_W italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = italic\_θ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT. Further, as we also assume zTC−1Ft=vtsuperscript𝑧𝑇superscript𝐶1subscript𝐹𝑡subscript𝑣𝑡z^{T}C^{-1}F\_{t}=v\_{t}italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_v start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT, we get that the difference becomes identical to the Q-value update. | | | | =(θTϕxt,at−vt)ϕxt,atTϕx′,a′∎absentsuperscript𝜃𝑇subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝑣𝑡superscriptsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡𝑇subscriptitalic-ϕsuperscript𝑥′superscript𝑎′\displaystyle=(\theta^{T}\phi\_{x\_{t},a\_{t}}-v\_{t})\phi\_{x\_{t},a\_{t}}^{T}\phi\_{x^{\prime},a^{\prime}}\qed= ( italic\_θ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT - italic\_v start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_∎ | | ###### Proposition 9. There exists a (nonlinear) representation of the cumulative distribution function parametrized by W∈ℝK×d𝑊superscriptℝ𝐾𝑑W\in\mathbb{R}^{K\times d}italic\_W ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_K × italic\_d end\_POSTSUPERSCRIPT such that Z0=𝔼Q0subscript𝑍0𝔼subscript𝑄0Z\_{0}\overset{\mathbb{E}}{=}Q\_{0}italic\_Z start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG = end\_ARG italic\_Q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT but after applying the semi-gradient update rule | | | | | --- | --- | --- | | | Wt+1:=Wt+α∇Wℓ22(ψ(W,ϕ(xt,at)),FZt′),assignsubscript𝑊𝑡1subscript𝑊𝑡𝛼subscript∇𝑊superscriptsubscriptℓ22𝜓𝑊italic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝐹superscriptsubscript𝑍𝑡′W\_{t+1}:=W\_{t}+\alpha\nabla\_{W}\ell\_{2}^{2}(\psi(W,\phi(x\_{t},a\_{t})),F\_{Z\_{t}^{\prime}}),italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT := italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_α ∇ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_ψ ( italic\_W , italic\_ϕ ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ) , italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ) , | | where FZt′subscript𝐹superscriptsubscript𝑍𝑡normal-′F\_{Z\_{t}^{\prime}}italic\_F start\_POSTSUBSCRIPT italic\_Z start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT is the cumulative distribution function of the projected Bellman target, we have Z1≠𝔼Q1subscript𝑍1𝔼subscript𝑄1Z\_{1}\overset{\mathbb{E}}{\neq}Q\_{1}italic\_Z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT overblackboard\_E start\_ARG ≠ end\_ARG italic\_Q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT. n We present a concrete example where this is the case. For simplicity, the example will be one in which the target distribution is equal in expectation to the predicted distribution, but has a different law. Thus the update to the expected parameters will be zero, but the update to the distributional parameters will be non-zero, and if this update changes the expected value of the predicted distribution, then the new distributional prediction will disagree with the new expected prediction. We will denote by σ(y)𝜎𝑦\sigma(y)italic\_σ ( italic\_y ) the sigmoid function | | | | | --- | --- | --- | | | σ(y)=11+e−y.𝜎𝑦11superscript𝑒𝑦\sigma(y)=\frac{1}{1+e^{-y}}.italic\_σ ( italic\_y ) = divide start\_ARG 1 end\_ARG start\_ARG 1 + italic\_e start\_POSTSUPERSCRIPT - italic\_y end\_POSTSUPERSCRIPT end\_ARG . | | Let 𝐳=(−1,0,1).𝐳101\textbf{z}=(-1,0,1).z = ( - 1 , 0 , 1 ) . Let W=(w1,w2)𝑊subscript𝑤1subscript𝑤2W=(w\_{1},w\_{2})italic\_W = ( italic\_w start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_w start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ), and set | | | | | --- | --- | --- | | | ψW(x):=[σ(w1x1),σ(w2x2),1]assignsubscript𝜓𝑊𝑥𝜎subscript𝑤1subscript𝑥1𝜎subscript𝑤2subscript𝑥21\psi\_{W}(x):=[\sigma(w\_{1}x\_{1}),\sigma(w\_{2}x\_{2}),1]italic\_ψ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT ( italic\_x ) := [ italic\_σ ( italic\_w start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_σ ( italic\_w start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , 1 ] | | corresponding to F(−1),F(0),F(1)𝐹1𝐹0𝐹1F(-1),F(0),F(1)italic\_F ( - 1 ) , italic\_F ( 0 ) , italic\_F ( 1 ), with | | | | | --- | --- | --- | | | W0=[−ln⁡(2),−ln⁡(1/2)/2].subscript𝑊02122W\_{0}=[-\ln(2),-\ln(1/2)/2].italic\_W start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = [ - roman\_ln ( 2 ) , - roman\_ln ( 1 / 2 ) / 2 ] . | | Set | | | | | --- | --- | --- | | | ψθ(ϕx,a):=zTC−1ψW(ϕx,a)assignsubscript𝜓𝜃subscriptitalic-ϕ𝑥𝑎superscript𝑧𝑇superscript𝐶1subscript𝜓𝑊subscriptitalic-ϕ𝑥𝑎\psi\_{\theta}(\phi\_{x,a}):=z^{T}C^{-1}\psi\_{W}(\phi\_{x,a})italic\_ψ start\_POSTSUBSCRIPT italic\_θ end\_POSTSUBSCRIPT ( italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ) := italic\_z start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_C start\_POSTSUPERSCRIPT - 1 end\_POSTSUPERSCRIPT italic\_ψ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT ( italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ) | | We sample a transition starting from ϕxt,atsubscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡\phi\_{x\_{t},a\_{t}}italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT and compute target distribution Ftsubscript𝐹𝑡F\_{t}italic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT with values | | | | | --- | --- | --- | | | ϕxt,at=(1,2) and F=[0,1,1]subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡12 and 𝐹011\phi\_{x\_{t},a\_{t}}=(1,2)\text{ and }F=[0,1,1]italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = ( 1 , 2 ) and italic\_F = [ 0 , 1 , 1 ] | | respectively. Then θ𝜃\thetaitalic\_θ remains the same but the expected value of ψ𝜓\psiitalic\_ψ changes when we perform a gradient update. We first calculate the TD(0) semi-gradient with respect to the parameters W𝑊Witalic\_W: | | | | | | --- | --- | --- | --- | | | ∇W[1]subscript∇𝑊1\displaystyle\nabla\_{W}[1]∇ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT [ 1 ] | =(F(−1)−Fψ(−1))∂∂W1(Fψ(−1))absent𝐹1subscript𝐹𝜓1subscript𝑊1subscript𝐹𝜓1\displaystyle=(F(-1)-F\_{\psi}(-1))\frac{\partial}{\partial W\_{1}}(F\_{\psi}(-1))= ( italic\_F ( - 1 ) - italic\_F start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( - 1 ) ) divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_W start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_ARG ( italic\_F start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( - 1 ) ) | | | | | =(0−13)σ(ln⁡(2))(1−σ(ln⁡(2)))(−1)absent013𝜎21𝜎21\displaystyle=(0-\frac{1}{3})\sigma(\ln(2))(1-\sigma(\ln(2)))(-1)= ( 0 - divide start\_ARG 1 end\_ARG start\_ARG 3 end\_ARG ) italic\_σ ( roman\_ln ( 2 ) ) ( 1 - italic\_σ ( roman\_ln ( 2 ) ) ) ( - 1 ) | | | | | =227absent227\displaystyle=\frac{2}{27}= divide start\_ARG 2 end\_ARG start\_ARG 27 end\_ARG | | | | ∇W[2]subscript∇𝑊2\displaystyle\nabla\_{W}[2]∇ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT [ 2 ] | =(F(0)−Fψ(0))∂∂W1(Fψ(0))absent𝐹0subscript𝐹𝜓0subscript𝑊1subscript𝐹𝜓0\displaystyle=(F(0)-F\_{\psi}(0))\frac{\partial}{\partial W\_{1}}(F\_{\psi}(0))= ( italic\_F ( 0 ) - italic\_F start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( 0 ) ) divide start\_ARG ∂ end\_ARG start\_ARG ∂ italic\_W start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_ARG ( italic\_F start\_POSTSUBSCRIPT italic\_ψ end\_POSTSUBSCRIPT ( 0 ) ) | | | | | =(1−23)σ(−ln⁡(2))(1−σ(−ln⁡(2)))(−2)absent123𝜎21𝜎22\displaystyle=(1-\frac{2}{3})\sigma(-\ln(2))(1-\sigma(-\ln(2)))(-2)= ( 1 - divide start\_ARG 2 end\_ARG start\_ARG 3 end\_ARG ) italic\_σ ( - roman\_ln ( 2 ) ) ( 1 - italic\_σ ( - roman\_ln ( 2 ) ) ) ( - 2 ) | | | | | =−427absent427\displaystyle=\frac{-4}{27}= divide start\_ARG - 4 end\_ARG start\_ARG 27 end\_ARG | | Let α=1,Wt+1=Wt+α∇Wℓ22(ψW(ϕx,a),F)formulae-sequence𝛼1subscript𝑊𝑡1subscript𝑊𝑡𝛼subscript∇𝑊superscriptsubscriptℓ22subscript𝜓𝑊subscriptitalic-ϕ𝑥𝑎𝐹\alpha=1,W\_{t+1}=W\_{t}+\alpha\nabla\_{W}\ell\_{2}^{2}(\psi\_{W}(\phi\_{x,a}),F)italic\_α = 1 , italic\_W start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT = italic\_W start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT + italic\_α ∇ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT roman\_ℓ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_ψ start\_POSTSUBSCRIPT italic\_W end\_POSTSUBSCRIPT ( italic\_ϕ start\_POSTSUBSCRIPT italic\_x , italic\_a end\_POSTSUBSCRIPT ) , italic\_F ). Then we claim that the expected value of the new random variable Zt+1(1,2)subscript𝑍𝑡112Z\_{t+1}(1,2)italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( 1 , 2 ) denoted Fψ′(1,2)subscript𝐹superscript𝜓′12F\_{\psi^{\prime}(1,2)}italic\_F start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( 1 , 2 ) end\_POSTSUBSCRIPT is different from the expectation of Fψ(1,2)subscript𝐹𝜓12F\_{\psi(1,2)}italic\_F start\_POSTSUBSCRIPT italic\_ψ ( 1 , 2 ) end\_POSTSUBSCRIPT. To see this, consider: | | | | | | --- | --- | --- | --- | | | 𝔼[Zt+1(1,2)]𝔼delimited-[]subscript𝑍𝑡112\displaystyle\mathbb{E}[Z\_{t+1}(1,2)]blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( 1 , 2 ) ] | =(−1)pψ′(1,2)(z1)+(1)pψ′(1,2)(z3)absent1subscript𝑝superscript𝜓′12subscript𝑧11subscript𝑝superscript𝜓′12subscript𝑧3\displaystyle=(-1)p\_{\psi^{\prime}(1,2)}(z\_{1})+(1)p\_{\psi^{\prime}(1,2)}(z\_{3})= ( - 1 ) italic\_p start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( 1 , 2 ) end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + ( 1 ) italic\_p start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ( 1 , 2 ) end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT ) | | | | | =−Fψ′(z1)+(1−Fψ′(z2))absentsubscript𝐹superscript𝜓′subscript𝑧11subscript𝐹superscript𝜓′subscript𝑧2\displaystyle=-F\_{\psi^{\prime}}(z\_{1})+(1-F\_{\psi^{\prime}}(z\_{2}))= - italic\_F start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + ( 1 - italic\_F start\_POSTSUBSCRIPT italic\_ψ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_z start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ) | | | | | =−11+e(ln⁡(2)+2/27)(1)+1absent11superscript𝑒222711\displaystyle=\frac{-1}{1+e^{(\ln(2)+2/27)(1)}}+1= divide start\_ARG - 1 end\_ARG start\_ARG 1 + italic\_e start\_POSTSUPERSCRIPT ( roman\_ln ( 2 ) + 2 / 27 ) ( 1 ) end\_POSTSUPERSCRIPT end\_ARG + 1 | | | | | −11+e(ln⁡(12)/2−4/27)(2)11superscript𝑒1224272\displaystyle-\frac{1}{1+e^{(\ln(\frac{1}{2})/2-4/27)(2)}}- divide start\_ARG 1 end\_ARG start\_ARG 1 + italic\_e start\_POSTSUPERSCRIPT ( roman\_ln ( divide start\_ARG 1 end\_ARG start\_ARG 2 end\_ARG ) / 2 - 4 / 27 ) ( 2 ) end\_POSTSUPERSCRIPT end\_ARG | | | | | ≈−0.05≠0=𝔼[Qt+1(1,2)]absent0.050𝔼delimited-[]subscript𝑄𝑡112\displaystyle\approx-0.05\neq 0=\mathbb{E}[Q\_{t+1}(1,2)]≈ - 0.05 ≠ 0 = blackboard\_E [ italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( 1 , 2 ) ] | | And so 𝔼[Zt+1(ϕxt,at)]≠Qt+1(ϕxt,at)𝔼delimited-[]subscript𝑍𝑡1subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡subscript𝑄𝑡1subscriptitalic-ϕsubscript𝑥𝑡subscript𝑎𝑡\mathbb{E}[Z\_{t+1}(\phi\_{x\_{t},a\_{t}})]\neq Q\_{t+1}(\phi\_{x\_{t},a\_{t}})blackboard\_E [ italic\_Z start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ] ≠ italic\_Q start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ( italic\_ϕ start\_POSTSUBSCRIPT italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) 8 Additional Experimental Results ---------------------------------- We first present preliminary results from the gridworld experiment described in section 5.1. We set all of our agents to initially predict the same value for each state-action pair on each random seed, and then allow the agents to update their predictions using their respective update rules and take actions according to an ϵitalic-ϵ\epsilonitalic\_ϵ-greedy policy. We note that we no longer constrain all of the agents to take the same trajectory but couple them to all use the same random seed. Thus, if two agents always agree on the optimal action, they will attain the exact same performance in the gridworld. This is what we see in the plot below. Indeed, in the gridworld problem the difference between updating by the gradient of the PMF only marginally alters performance. The agent’s objective in the gridworld environment is to reach the goal state in as few steps as possible, and fewer steps per episode indicates that the agent has learned the most direct route in the graph below. ![Refer to caption](/html/1901.11084/assets/gridworld_stepsperepisode.png) Figure 4: An environment where PMF updates perform well We see a larger disparity between CDF and PDF gradient updates in a 3-state MDP, where notably rewards are significantly less sparse. The agent’s goal in this environment is to take the left action in the leftmost state or the right action in the rightmost state. In the 3-state MDP we relax the randomization coupling slightly and average over 5 runs in the MDP. We observe that although initially the PDF gradient updates perform similarly to the CDF and Q-learning gradient updates, they more often result in sub-optimal trajectories as training progresses. In contrast, the CDF updates continue to produce the same average performance as q-learning. ![Refer to caption](/html/1901.11084/assets/threestate.png) Figure 5: An environment where PMF updates suffer ![Refer to caption](/html/1901.11084/assets/pdf_ep_800.png) ![Refer to caption](/html/1901.11084/assets/cdf_ep_800.png) Figure 6: Predictions of value of goal state in 3-state MDP environment We observe here that this worse performance occurs in conjunction with a predicted ‘distribution’ that does not resemble a probability distribution, having negative probabilities which do not integrate to 1. In contrast, the predictions output by the agent which models the CDF are always proper distributions.
02586995-7c49-4384-a1dd-bc513e00e0f6
trentmkelly/LessWrong-43k
LessWrong
Advice to new Doctors starting practice Hi all, Please read the Disclaimers at the end of the post first, if you're easily offended.   Generalists(general medicine): 1. Get extremely unbeatable at 20 Questions(rationality link). It'll help you make your initial diagnoses(ones based on questions about symptoms) faster and more accurate. 2. Understand probability, bayes theorem and how to apply it** This will help you interpret the test results, you ordered based on the 20 questions. 3.  Understand base rate fallacy, and how to avoid  being over confident. 4. Understand the upsides and downsides of the drugs you prescribe. Know the probabilities of fatal and adverse side-effects and update them with evidence(Bayes' theorem mentioned above) as you try out different brands and combinations. 5. Know the costs and benefits of any treatment and help the patient make a good decision based on the cost-benefit analysis of treatment combined with the probabilities of outcome. 6. Ask and Keep a history of medical records and allergies of the patient and till their grand parents.* 7. Be willing and able to judge, when a patient is better off with a specialist. Try to keep in touch with Doctors nearby and hopeful all types of specialists. 8. Explain the treatment options and pros and cons in easy language to the patients. It'll reduce misunderstandings and eventually dis-satisfaction with the treatment. 9. Resist the urge to treat patients as NPCs. Involve them in the treatment process. 10. Meditate 11. Find a hobby, that you can keep improving on till the end of life. 12. Be aware of the conflict of interest between the patient and the pharmaceutical companies. 13. Have enough research skills to form opinions on base rates/probabilities in different diseases and treatment methods as needed. 14. If you're in a big hospital setup, make sure you've the best hospital administration.  15. Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires get
712b8fcf-1313-43cb-9ef2-b08054c8f9a8
StampyAI/alignment-research-dataset/blogs
Blogs
QACI: the problem of blob location, causality, and counterfactuals QACI: the problem of blob location, causality, and counterfactuals ------------------------------------------------------------------ for [QACI](qaci.html), i intend to use pieces of data (constant-length raw bitstrings, or equivalently bounded natural numbers) to act kind of as "coordinates" or "pointers" around things we care about in the physical world, not just in space-time-timelines but also in *encoding*: a "location" for a blob of data would describe how that piece of data is written on the physical elementary particule structure of a harddrive or bar of memory, in a physics-world being ran by a hypothesis in [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), or simply on the [universal program](all-claw-no-world.html). for my purposes, i need to be able to: * locate three things in a world-hypothesis: the question `q`, the answer `r`, and the AI `G`. * filter for locations of `q,r,G` where `q<r<G`, with `<` being a partial order expressing causality — a pretty natural notion in turing machines. * be able to run counterfactual `q'`s and get the resulting counterfactual `r'`s. * possibly, locate `G`'s ([single?](delegated-embedded-agency-decision-theory.html)) action `a` and test for counterfactuals `a'` for the purpose of [embedded agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN), [if we need to solve that](delegated-embedded-agency-decision-theory.html). to put this together, it seems like what we need is a way to locate pieces of data, and to be able to filter for precedence, and to hypothesize counterfactuals. the format that i expect this to look like is a set of weighed hypotheses: * 40% chance that the blob is *here* and *this* would be how to insert a counterfactual in its place * 25% chance that it's *there* instead, and *that* would be how to insert a counterfactual in its place * etc… "filtering for causality" would mean that, given one specific weighed location hypothesis for `x` and another for `y`, we can get an answer — either a boolean or a degree of confidence — about whether `x<y` or not. thus, if we require that `x<y`, we can either rule out or reduce the confidence in pairs of weighed location hypotheses that don't verify `x<y`. i tentatively draw up such a potential blob-locating scheme [here](rough-sketch-formal-aligned-ai.html) using "carving functions" `C`, but they might not be super well explained, they seem wonky, and they don't filter for causality. we would want the signal from the set of correct hypotheses to outweigh incorrect hypotheses. there are two different shapes for failure to correctly locate blobs: * "naive" mislocations, where the hypotheses don't really capture what we want and and inserting a counterfactual would result in garbled results * *adverserial* mislocations, where some agent — such as the AI trying to make its own job easier, or remote aliens — is arranging their future lightcone to grab as much as they can of the mass of blob location hypotheses, and through those manipulate the behavior of our AI that uses them. the latter case is particularly important: we need the AI to be unable to hijack the question-answer interval even if it fills *almost its entire lightcone* with fake but maximally-easily-locatable question-answer intervals. we need to go: "well, in most of the probability mass, the first instance of the AI is *here*, and so we'll only look at question-answer intervals *before that*". because we care about "the *first* instance of the AI", sticking to solomonoff induction (where each world is its own program) rather than taking [the entire universal program](all-claw-no-world.html) seems like it might make sense. when it comes to inserting counterfactuals, it starts to seem like it would matter a whole lot whether the pattern being injected is (from the point of view of the AI being launched) is more (quantum-)random like `q` or more deterministic like `G`, because if the question `q` is quantum-random, then "injecting the counterfactual" might simply look like "find the other many-worlds timeline where `q` was a different string instead". it even seems like such a blob-locating framework *might* be experimentable with to some extent.
52b889c9-04ad-48bc-ae41-6128ead2a16a
StampyAI/alignment-research-dataset/eaforum
Effective Altruism Forum
Is anyone else also getting more worried about hard takeoff AGI scenarios? Hi, this is my first post and I apologize if this question is too subjective, in which case I'll take it down.  Ok here goes: I'm personally starting to feel an accelerating, slightly visceral sense of fear at the increasing pace of news about AI breakthroughs that seem mere years from causing mass unemployment among white collar and blue collar workers alike (everything from automated artistry to automated burger-making).  My wife & I have been incredibly blessed with two adorable toddlers so far, and if they eat healthily, exercise, and benefit from the arrival of regenerative medical technology such as stem cell therapies, it seems quite reasonable that they'll live for at least 110 years if not much more (I hope even 1,000's of years at least).  Even taking the base case as 110 years, it seems a near-certainty that a transformative and super-dangerous AGI Singularity or Intelligence Explosion will occur while they are alive. Since I obviously deeply love our kids, I think about this a lot, and since I work in this field and am well-aware of the risks, I tend to think that the Singularity is the #1 or #2 threat to my young children's lives, together with nuclear war.   I also can't help but wonder what jobs they will be able to find on the job market that aren't yet taken over by AI, by the time they graduate from college in 20 years or more. I wish my fears were unfounded, but I'm well acquainted with the various dangers of both x-risks and s-risks associated with unaligned, hacked, or corrupted AGI.  I help run a startup called Preamble which works to reduce AGI s-risk and x-risk, and as part of our civic engagement efforts I've spent some years working with folks in the US military to raise awareness about AGI x-risks, especially those associated with 'Skynet' systems (hypothetical systems called Nuclear Command Automation systems, which would be deeply stupid to ever build, even for the nation that built them).  The author of the following article, Prof. Michael Klare, is a good friend, and he sought my advice while he was planning this piece, so it represents a good synthesis of our views: <https://www.armscontrol.org/act/2020-04/features/skynet-revisited-dangerous-allure-nuclear-command-automation>  He and I, along with other friends and allies of ours, have recently been grateful to see that some of our multi-year, long-shot civic engagement efforts have borne fruit!   Most exciting are these two US government statements:    (1)  In March 2021, the National Security Commission on AI (NSCAI) included a couple lines in their official Report to Congress which, for the first time, briefed Congress about the importance of value alignment technology as a field of technology, and one which the US should invest in as a way to reduce AGI risk:  "Advances in AI, including the mastery of more general AI capabilities along one or more dimensions, will likely provide new capabilities and applications. Some of these advances could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to assure that systems are aligned with goals and values, including safety, robustness and trustworthiness. The US should monitor advances in AI and make necessary investments in technology and give attention to policy so as to ensure that AI systems and their uses align with our goals and values."    (2)  In Oct 2022, the Biden administration's 2022 Nuclear Posture Review (NPR) became the first ever statement by the US Federal government explicitly prohibiting any adoption of Nuclear Command Automation by the US:  "In all cases, the United States will maintain a human “in the loop” for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment." I'm extremely grateful that the US has finally banned Skynet systems!  Now, we at Preamble and in the arms control community are trying to find allies within China so as to convince them to make a similar ban of Skynet systems in their jurisdiction.  That would also open the door for our nations to have a dialogue on how to avoid being tricked into going to war, by an insane terrorist group using cyberattacks and misinformation to cause what is called a Catalytic Nuclear War (a war that neither side wanted, that was caused by trickery from a 3rd "catalytic" party).  https://mwi.usma.edu/artificial-intelligence-autonomy-and-the-risk-of-catalytic-nuclear-war/ All of us in the AGI safety community are working hard to prevent bad outcomes, but it feels like the years are starting to slip away frighteningly quickly on what might be the wick of the candle of human civilization, if we don't get 1,000 details right to ensure everything goes perfectly according to plan when the superintelligence is born.  Not only do we have to solve AI alignment, but we also have to perfectly solve software and hardware supply chain security; otherwise we can't trust the software to actually do what the pixels on the screen say that the source code says.  http://www.cs.cmu.edu/~rdriley/487/papers/Thompson\_1984\_ReflectionsonTrustingTrust.pdf I'm sorry if I'm rambling but I just wanted to convey an overall sense and impression of an emotion and see if others were feeling the same.  I dread that our civilization is hurtling at 100MPH towards an impassable cliff, and it's starting to give me a sense of visceral fear.  It really does seem like OpenAI, and the companies they are inspiring, are flooring the gas pedal and I was just wondering if anyone else is feeling scared.  Thank you.
2efdb49e-9014-44c2-b5fd-7a8fdae8e533
trentmkelly/LessWrong-43k
LessWrong
March 2012 Media Thread There was a recent discussion considering the idea of a monthly Book (later expanded to movies, links, etc) thread. The poll was pretty unanimous that this was A Good Idea (tm). The past two threads have had a decent amount of activity, so let's keep going. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! Rules: * Please avoid downvoting; this is a thread for sharing subjective experiences, and people should feel comfortable posting their personal opinion without fearing a karma backlash. If you disagree with a person's recommendation, please post a comment to that effect. * If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations. * Please use the comment trees, which I was apparently too dumb to do.
0618cecf-0dd1-4629-8761-b05776976aae
trentmkelly/LessWrong-43k
LessWrong
FAI FAQ draft: What is nanotechnology? I invite your feedback on this snippet from the forthcoming Friendly AI FAQ. This one is an answer to the question "What is nanotechnology?" For references, see here. _____   Nanotechnology is the study of materials and devices built at the scale of 1-100 nanometers (“nano-” means “one billionth of”). A hydrogen atom is about 0.24nm across, so we’re talking about materials and devices built atom by atom. One famous piece of nanotechnology is the carbon nanotube. A carbon nanotube is a one-atom-thick sheet of graphite that is rolled into a seamless tube. Because of their physical properties, carbon nanotubes usually allow ballistic conduction, meaning that electrons can flow through the tube without collisions (Lin & Shung 1995), which allows the carbon nanotubes to conduct electricity without heat dissipation (Chico et al. 1996)! Carbon nanotubes are also much stronger than diamond or steel (Popov et al. 2002). Easton Bell Sports uses carbon nanotubes to build tougher bicycles, doctors use carbon nanotubes as scaffolding for bone growth in tissue engineering applications (Zanello et al. 2006), and one company uses carbon nanotubes to produce a special kind of high-conductance heater. New nanomaterials are being developed every year, and may see applications in nearly every field of technology (Allhoff et al. 2010). Nanotechnology has already given us stain-free pants, larger-capacity hard drives, stronger cement, longer-lasting tennis balls, the world’s first sale of a quantum computer, a new method for fighting cancer, and much more. An even more radical technology was described in Eric Drexler’s (1987) Engines of Creation. As Allhoff et al. (2010, p. 7) explain, Drexler predicted > a new form of technology based on molecular “assemblers,” which would be able to “place atoms in almost any reasonable arrangement” and thereby allow the formation of “almost anything the laws of nature allow.” This may sound like a fanciful and fantastical idea but, as Drexler
e320faea-9ceb-463d-908b-668078136773
trentmkelly/LessWrong-43k
LessWrong
Introducing Mnosis Introducing Mnosis The Guild of the ROSE is proud to announce the beta launch of a new self-improvement tool: Mnosis. > mne-, root. Mind, memory. > > Gnosis. Knowledge. What is Mnosis? In a nutshell -- a magic deck of flashcards. Think of it as Anki for predefined subjects. Instead of manually creating a deck of cards by hand, Mnosis comes preloaded with all the cards needed to train your skills in a particular area. As you click through cards, Mnosis adjusts itself to fit your mental profile. Mnosis is designed to use your time optimally, showing you the exact card you need to see to advance your mastery without boring or overwhelming you. It also tracks a complex web of dependencies between cards. If you get one card wrong, Mnosis will show you cards covering related concepts. This mimics the process of real learning, where a student comes at a topic from multiple angles until they understand it. Mnosis currently has the following modules: * Math. Master mental arithmetic from 1+1=2 through 254*29=7336 * Organic Chemistry. Effortlessly translate the IUPAC names for organic molecules and ace the OChem college weed-out courses. * Chemistry. Memorize the atomic symbols and numbers. * Probability. Master the basic definitions and concepts of probability theory. The long-term goal of Mnosis is to solve learning, reducing mastery of complex subjects to a simple (for the user) process. As a result, the underlying algorithms undergo constant improvement and each module is continually tweaked and extended. Additional Modules A fifth module -- music -- is on the roadmap. Stay tuned for more information. Mnosis was designed from the ground up to be highly extensible. New modules are easy to create. If you would like to see a specific module added to Mnosis, contact Matt Freeman: moridinamael#1693 on Discord. Access Mnosis is currently available with a free Guild account. After registering, you'll be able to access the Mnosis page.
189eb501-5b98-4112-b928-fa27a0d3f078
trentmkelly/LessWrong-43k
LessWrong
Meetup : West LA Meetup - Likelihood Ratios Discussion article for the meetup : West LA Meetup - Likelihood Ratios WHEN: 06 June 2012 07:00:00PM (-0700) WHERE: 10850 West Pico Blvd, Los Angeles, CA 90064 When: 7:00pm - 9:00pm Wednesday, June 6th. Where: The Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. Parking is free for 3 hours. Discussion Topic: This week will be all about gaining a useful intuition for Bayes' Theorem. We're going to focus on the formalization with odds and likelihood ratios. If you are already familiar with the concept of likelihood ratios, I recommend you read this article. If you have no idea what I'm talking about, read that Bayes' Theorem post I linked and, if you have time, check out one of the blog posts or external links from the article. But don't worry if you don't have time to read any articles, or even if you've never read any Less Wrong! Bring a friend! The atmosphere is casual, and good, intelligent conversation with friendly people is guaranteed. I will bring a whiteboard with Bayes' Theorem written on it. P.S. One of our routines is to ask if anyone has any predictions they would like to commit. Feel free to organize your thoughts ahead of time! Discussion article for the meetup : West LA Meetup - Likelihood Ratios
21e77b75-5a52-4b86-a01c-5604ad417919
trentmkelly/LessWrong-43k
LessWrong
Philosophy Needs to Trust Your Rationality Even Though It Shouldn't Part of the sequence: Rationality and Philosophy > Philosophy is notable for the extent to which disagreements with respect to even those most basic questions persist among its most able practitioners, despite the fact that the arguments thought relevant to the disputed questions are typically well-known to all parties to the dispute. Thomas Kelly > The goal of philosophy is to uncover certain truths... [But] philosophy continually leads experts with the highest degree of epistemic virtue, doing the very best they can, to accept a wide array of incompatible doctrines. Therefore, philosophy is an unreliable instrument for finding truth. A person who enters the field is highly unlikely to arrive at true answers to philosophical questions. Jason Brennan   After millennia of debate, philosophers remain heavily divided on many core issues. According to the largest-ever survey of philosophers, they're split 25-24-18 on deontology / consequentialism / virtue ethics, 35-27 on empiricism vs. rationalism, and 57-27 on physicalism vs. non-physicalism. Sometimes, they are even divided on psychological questions that psychologists have already answered: Philosophers are split evenly on the question of whether it's possible to make a moral judgment without being motivated to abide by that judgment, even though we already know that this is possible for some people with damage to their brain's reward system, for example many Parkinson's patients, and patients with damage to the ventromedial frontal cortex (Schroeder et al. 2012).1 Why are physicists, biologists, and psychologists more prone to reach consensus than philosophers?2 One standard story is that "the method of science is to amass such an enormous mountain of evidence that... scientists cannot ignore it." Hence, religionists might still argue that Earth is flat or that evolutionary theory and the Big Bang theory are "lies from the pit of hell," and philosophers might still be divided about whether somebody can make
75f5ee21-d18c-466d-85dc-59d8af46cc8c
awestover/filtering-for-misalignment
Redwood Research: Alek's Filtering Results
id: post3390 In my last post , I argued that interaction between the human and the AI system was necessary in order for the AI system to “stay on track” as we encounter new and unforeseen changes to the environment. The most obvious implementation of this would be to have an AI system that keeps an estimate of the reward function. It acts to maximize its current estimate of the reward function, while simultaneously updating the reward through human feedback. However, this approach has significant problems. Looking at the description of this approach, one thing that stands out is that the actions are chosen according to a reward that we know is going to change. (This is what leads to the incentive to disable the narrow value learning system.) This seems clearly wrong: surely our plans should account for the fact that our rewards will change, without treating such a change as adversarial? This suggests that we need to have our action selection mechanism take the future rewards into account as well. While we don’t know what the future reward will be, we can certainly have a probability distribution over it. So what if we had uncertainty over reward functions, and took that uncertainty into account while choosing actions? Setup We’ve drilled down on the problem sufficiently far that we can create a formal model and see what happens. So, let’s consider the following setup: The human, Alice, knows the “true” reward function that she would like to have optimized. The AI system maintains a probability distribution over reward functions, and acts to maximize the expected sum of rewards under this distribution. Alice and the AI system take turns acting. Alice knows that the AI learns from her actions, and chooses actions accordingly. Alice’s action space is such that she cannot take the action “tell the AI system the true reward function” (otherwise the problem would become trivial). Given these assumptions, Alice and the AI system act optimally. This is the setup of Cooperative Inverse Reinforcement Learning (CIRL). The optimal solution to this problem typically involves Alice “teaching” the AI system by taking actions that communicate what she does and does not like, while the AI system “asks” about parts of the reward by taking actions that would force Alice to behave in different ways for different rewards. Does this solve our problems? Two of the problems we identified in the last post are simply assumed away: Alice does not know the “true” reward function, but we assumed that she does. Alice may be unable to optimally give feedback to the AI system, but we assume that she is optimal here. So this particular kind of reward uncertainty does not fix either of these problems. What about convergent instrumental subgoals ? Utility preservation. One major worry we had with the original setup was that the AI system would disable its narrow value learning system, as a manifestation of the instrumental goal of protecting its utility function. This is reversed in our setup: the AI system has a positive incentive to continue doing narrow value learning, since it helps it hone in on the true reward function, which in turn allows it to optimize the reward better. (We might worry that this prevents us from fixing any problems in the narrow value learning system, but that is a robustness problem: in the world where everything is working correctly, this is the correct incentive.) Survival incentive, aka shutdown incorrigibility. Another worry is that the AI system has a survival incentive that causes it to prevent us from shutting it down. With reward uncertainty, the fact that we are trying to shut the AI system down is itself strong evidence about the reward function. The AI system should reason that its operation leads to worse outcomes, and so allow itself to be turned off. The Off-Switch Game formalizes this reasoning in a simple setting. The AI system can either directly take action a (perhaps by disabling an off switch), can shut down, or can allow Alice to choose between these options. If the AI system allows Alice to choose, Alice then decides whether or not to shut down the AI system. Assuming that Alice is optimal, the AI system reasons that when Alice chooses to shut it down, the true reward function must have been one which makes the action a bad, and so it does better by deferring to Alice. However, when Alice is modeled as noisily rational instead of optimal, the AI system might reason that Alice might make a mistake when deciding to shut the AI system down, and so it might take action a directly without deferring to her. So, the AI system becomes shutdown corrigible, as long as it assumes that Alice is sufficiently rational. Should robots be obedient? makes a similar point, arguing that an AI system that learns preferences and then acts to maximize their satisfaction can perform better than an AI system that simply obeys instructions, because humans are not perfectly rational. This creates a tradeoff between performance and obedience (which shutdown corrigibility is an instance of). Of course, these simple models exclude many actions that a realistic AI system could take. In particular, it seems likely that an AI system would prefer to disable the shutdown button, gather information about the reward until it has fully updated , and optimize the resulting set of rewards. If the space of reward functions is misspecified, as it likely will be, this will lead to bad behavior. (This is the point made by Incorrigibility in the CIRL Framework .) Note though that while this cuts against shutdown corrigibility (since the AI system would prefer to disable the shutdown button), I would frame the problem differently. If the space of rewards is well-specified and has sufficient weight on the true reward function and the AI system is sufficiently robust and intelligent, then the AI system must update strongly on us attempting to shut it down. This should cause it to stop doing the bad thing it was doing. When it eventually narrows down on the reward it will have identified the true reward, which by definition is the right thing to optimize. So even though the AI system might disable its off switch, this is simply because it is better at knowing what we want than we are, and this leads to better outcomes for us. So, really the argument is that since we want to be robust (particularly to reward misspecification), we want shutdown corrigibility, and reward uncertainty is an insufficient solution for that. A note on CIRL There has been a lot of confusion on what CIRL is and isn’t trying to do, so I want to avoid adding to the confusion. CIRL is not meant to be a blueprint for a value-aligned AI system. It is not the case that we could create a practical implementation of CIRL and then we would be done. If we were to build a practical implementation of CIRL and use it to align powerful AI systems, we would face many problems: As mentioned above, Alice doesn’t actually know the true reward function, and she may not be able to give optimal feedback. As mentioned above, in the presence of reward misspecification the AI system may end up optimizing the wrong thing, leading to catastrophic outcomes. Similarly, if the model of Alice’s behavior is incorrect, as it inevitably will be, the AI system will make incorrect inferences about Alice’s reward, again leading to bad behavior. As an example that is particularly easy to model, should the AI system model Alice as thinking about the robot thinking about Alice, or should it model Alice as thinking about the robot thinking about Alice thinking about the robot thinking about Alice? How many levels of pragmatics is the “right” level? Lots of other problems have not been addressed: the AI system might not deal with embeddedness well, or it might not be robust and could make mistakes, etc. CIRL is supposed to bring conceptual clarity to what we could be trying to do in the first place with a human-AI system. In Dylan’s own words , “what cooperative IRL is, it’s a definition of how a human and a robot system together can be rational in the context of fixed preferences in a fully observable world state”. In the same way that VNM rationality informs our understanding of humans even though humans are not expected utility maximizers, CIRL can inform our understanding of alignment proposals, even though CIRL itself is unsuitable as a solution to alignment. Note also that this post is about reward uncertainty, not about CIRL. CIRL makes other points besides reward uncertainty, that are well explained in this blog post , and are not mentioned here. While all of my posts have been significantly influenced by many people, this post is especially based on ideas I heard from Dylan Hadfield-Menell. However, besides the one quote, the writing is my own, and may not reflect Dylan’s views.
a2303484-48f6-4a35-90ab-1e4d1f2701b1
trentmkelly/LessWrong-43k
LessWrong
Moving on from community living After 7 years at Deep End (and 4 more years in other group houses before that), Janos and I have moved out to live near a school we like and some lovely parks. The life change is bittersweet - we will miss living with our friends, but also look forward to a logistically simpler life with our kids. Looking back, here are some thoughts on what worked and didn't work well about living in a group house with kids. Pros. There were many things that we enjoyed about living at Deep End, and for a long time I couldn't imagine ever wanting to leave. We had a low-effort social life - it was great to have spontaneous conversations with friends without arranging to meet up. This was especially convenient for us as new parents, when it was harder to make plans and get out of the house, particularly when we were on parental leave. The house community also made a huge difference to our wellbeing during the pandemic, because we had a household bubble that wasn't just us.  We did lots of fun things together with our housemates - impromptu activities like yoga / meditation / dancing / watching movies, as well as a regular check-in to keep up on each other's lives. We were generally more easily exposed to new things - meeting friends of friends, trying new foods or activities that someone in the house liked, etc. Our friends often enjoyed playing with the kids, and it was helpful to have someone entertain them while we left the living room for a few minutes. Our 3 year old seems more social than most kids of the pandemic generation, which is partly temperament and partly growing up in a group house.  Cons. The main issue was that the group house location was obviously not chosen with school catchment areas or kid-friendly neighbourhoods in mind. The other downsides of living there with kids were insufficient space, lifestyle differences, and extra logistics (all of which increased when we had a second kid). Our family was taking up more and more of the common space - the living roo
dc221f1d-0b0e-437e-8b36-dd0ccf0dbef1
trentmkelly/LessWrong-43k
LessWrong
I've had it with those dark rumours about our culture rigorously suppressing opinions You folks probably know how some posters around here, specifically Vladimir_M, often make statements to the effect of:   "There's an opinion on such-and-such topic that's so against the memeplex of Western culture, we can't even discuss it in open-minded, pseudonymous forums like Less Wrong as society would instantly slam the lid on it with either moral panic or ridicule and give the speaker a black mark. Meanwhile the thought patterns instilled in us by our upbringing would lead us to quickly lose all interest in the censored opinion" Going by their definition, us blissfully ignorant masses can't even know what exactly those opinions might be, as they would look like basic human decency, the underpinnings of our ethics or some other such sacred cow to us. I might have a few guesses, though, all of them as horrible and sickening as my imagination could produce without overshooting and landing in the realm of comic-book evil: - Dictatorial rule involving active terror and brutal suppression of deviants having great utility for a society in the long term, by providing security against some great risk or whatever. - A need for every society to "cull the weak" every once in a while, e.g. exterminating the ~0.5% of its members that rank as weakest against some scale. - Strict hierarchy in everyday life based on facts from the ansectral environment (men dominating women, fathers having the right of life and death over their children, etc) - Mencius argued in favor of such ruthless practices, e.g. selling children into slavery, in his post on "Pronomianism" and "Antinomianism", stating that all contracts between humans should rather be strict than moral or fair, to make the system stable and predictable; he's quite obsessed with stability and conformity. - Some public good being created when the higher classes wilfully oppress and humiliate the lower ones in a ceremonial manner - The bloodshed and lawlessness of periodic large-scale war as a vital "pressure valve"
55fdab69-b08a-47d5-b5aa-93dbf54f1878
trentmkelly/LessWrong-43k
LessWrong
Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts? Authors: Sohee Yang, Nora Kassner, Elena Gribovskaya, Sebastian Riedel, Mor Geva. Abstract: > We evaluate how well Large Language Models (LLMs) latently recall and compose facts to answer multi-hop queries like "In the year Scarlett Johansson was born, the Summer Olympics were hosted in the country of". One major challenge in evaluating this ability is that LLMs may have developed shortcuts by encounters of the head entity "Scarlett Johansson" and the answer entity "United States" in the same training sequences or merely guess the answer based on frequency-based priors. To prevent shortcuts, we exclude test queries where the head and answer entities co-appear in pretraining corpora. Through careful selection of relations and facts and systematic removal of cases where models might guess answers or exploit partial matches, we construct an evaluation dataset SOCRATES (ShOrtCut-fRee lATent rEaSoning). We observe that LLMs demonstrate promising latent multi-hop reasoning abilities without exploiting shortcuts, but only for certain types of queries. For queries requiring latent recall of countries as the intermediate answer, the best models achieve 80% latent composability, but this drops to just 5% for the recall of years. Comparisons with Chain-of-Thought composability highlight a significant gap between the ability of models to reason latently versus explicitly. Analysis reveals that latent representations of the intermediate answer are constructed more often in queries with higher latent composability, and shows the emergence of latent multi-hop reasoning during pretraining. I've only skimmed for now, but seems relevant to Chain-of-Thought alignment and out of context reasoning (OOCR) as a threat model.  
7c723e57-8d93-4963-94d6-2f9c30b9c4b6
trentmkelly/LessWrong-43k
LessWrong
Seeing Red: Dissolving Mary's Room and Qualia Essential Background: Dissolving the Question How could we fully explain the difference between red and green to a colorblind person? Well, we could of course draw the analogy between colors of the spectrum and tones of sound; have them learn which objects are typically green and which are typically red (or better yet, give them a video camera with a red filter to look through); explain many of the political, cultural and emotional associations of red and green, and so forth... but it seems that the actual difference between our experience of redness and our experience of greenness is something much harder to convey. If we focus in on that aspect of experience, we end up with the classic philosophical concept of qualia, and the famous thought experiment known as Mary’s Room1. Mary is a brilliant neuroscientist who has been colorblind from birth (due to a retina problem; her visual cortex would work normally if it were given the color input). She’s an expert on the electromagnetic spectrum, optics, and the science of color vision. We can postulate, since this is a thought experiment, that she knows and fully understands every physical fact involved in color vision; she knows precisely what happens, on various levels, when the human eye sees red (and the optic nerve transmits particular types of signals, and the visual cortex processes these signals, etc). One day, Mary gets an operation that fixes her retinas, so that she finally sees in color for the first time. And when she wakes up, she looks at an apple and exclaims, "Oh! So that's what red actually looks like."2 Now, this exclamation poses a challenge to any physical reductionist account of subjective experience. For if the qualia of seeing red could be reduced to a collection of basic facts about the physical world, then Mary would have learned those facts earlier and wouldn't learn anything extra now– but of course it seems that she really does learn something when she sees red for the first time. This i
a0b47826-7a97-4778-a825-790be9abfe7d
trentmkelly/LessWrong-43k
LessWrong
Monthly Roundup #18: May 2024 As I note in the third section, I will be attending LessOnline at month’s end at Lighthaven in Berkeley. If that is your kind of event, then consider going, and buy your ticket today before prices go up. This month’s edition was an opportunity to finish off some things that got left out of previous editions or where events have left many of the issues behind, including the question of TikTok. OH NO All of this has happened before. And all of this shall happen again. > Alex Tabarrok: I regret to inform you that the CDC is at it again. > > Marc Johnson: We developed an assay for testing for H5N1 from wastewater over a year ago. (I wasn’t expecting it in milk, but I figured it was going to poke up somewhere.) > > However, I was just on a call with the CDC and they are advising us NOT to use it. > > I need a drink. > > They say it will only add to the confusion because we won’t know where it is coming from. I’m part of a team. I don’t get to make those decisions myself. > > Ben Hardisty: The usual institute, or did they have a good reason? > > Marc Johnson: They say it would only add to the confusion since we don’t know precisely where it is coming from. But then they said 2 minutes later that they aren’t sure this isn’t just regular influenza appearing late. We can answer that, so why don’t we??? I don’t get it. > > Alex: Are your team members considering bucking the CDC advice or has the decision been made to acquiesce? I understand them not wanting panic but man if that’s not self serving advice I don’t know what is. > > Marc Johnson: The CDC will come around. > > ZzippyCorgi11: Marc, can private entities ask you to test wastewater around their locations? Is the CDC effectively shutting down any and all testing of wastewater for H5N1? > > Marc Johnson: No, if people want to send me wastewater I can test them with other funding. I just can’t test the samples I get from state surveillance. > > JH: This is ridiculous. Do it anyway! > > Marc Johnson: It’s
e733ff42-353b-4196-9526-c187bd38750f
StampyAI/alignment-research-dataset/youtube
Youtube Transcripts
EleutherAI Interpretability Reading Group 220212: Interpreting across time said uh there's a couple people who are gonna be presenting today from introverts from the interpreting across time channel and so i'm gonna start by giving a bit of an overview of what we're what we're up to and what we're hoping to accomplish um and then igor has some preliminary results that he's going to present and i have some preliminary results that i'm going to present and then we're going to talk about kind of where where we would like to end up um for both the current phase of research as well as kind of our longer term plans because this is something i hope to be working on for quite a while so the origin of this of this project is that there is a lot of papers that in interpretability that study like for example the knowledge in neurons paper uh the example depth paper that study uh interpretability concepts and in particular kind of like how and where information is is represented or capacities are found within a neural network but don't really look at that in a in in the context of how the network changes during training a really good example of the work that i that i want us to be doing is the alpha fold paper which i know there's a presentation on a couple weeks ago so i wasn't able to to make that one uh where they had these really wonderful plots uh that they called what where when plots actually i should just pull these up to give an example uh oh sorry i said alpha folder let's say uh alpha zero that's right yep uh acquisition of chess knowledge and alpha zero so let me show my screen briefly um so i can provide this as an example to people who don't know what i'm talking about in case they're already here excellent you can see my screen we can yep so in the in the paper so acquisition of structural knowledge in alpha zero is a study in al the alpha zero uh chess engine develops its ability to play chess and um kind of at what stages of training uh different capabilities uh exist um so one of the one of the cruxes of this paper uh are these plots that they call what where when plots and so you have three axes here one of them is so these are these are uh classification they're taking a part of the of the network and they are testing whether or not it can just it can determine uh whether a particular property is holds so for example here we have uh in-check uh has contested open file can capture opponent's queen has a mate threat there's also ones that are slightly less categorical so uh for example these ones are about the material imbalance the numerical score that people assign to positions based on the fact that i've captured my opponent's rook and my opponent has captured two pawns in one night and you know the extent to which that is balanced or imbalanced as a as a set of traits and so what you see so these are all heat maps um and what you're seeing here is kind of three axes so you have training steps uh so zero is the untrained network um and then ten to the sixth is the number of steps that they've trained their their networks for uh block is is roughly synonymous at least in the context of language models uh comparable to uh the number of layers uh deep within the network so you have a large network and similar to some of the probing work uh we look at kind of the the first section or the first couple couple layers of the network and we're interested in evaluating the the accuracy and kind of what they found is that they're able to map out in this pretty uh compelling fashion both at how how where in the network's depth uh certain abilities arise as well as um you know how how much training you need to do to get them and they they do all sorts of analysis analyses with these plots and correlations um and one of you know one of the things that i think is really interesting for example you can see in uh plot a uh where they're measuring kind of the the engine's ability to estimate the score which is a proxy for how likely white is to win in a position um there's there's this really nice gradient that you see across training steps uh where it's ability to do this increases significantly and then plateaus it uh after about they train for so they train for after about ten to the fifth steps for the next 90 of the training um its capabilities in this regard don't really go up and so it's kind of plateaued and you don't need more than that in order to get really good performance uh what checking whether or not the engine knows it's in check that arises even earlier a performance kind of plateaus at like 10 to the fourth steps despite the fact they're doing 10 to the sixth steps of training and then there are other examples um here has a mate threat is slightly non-monotonic um and there there's all sorts of interesting things you can see about these landscapes this this one is a complicated concept that the engine just never really develops very robustly and uh so you know i i think that these plots are fabulous and i wish that a lot of the interpretability research that people do with neural networks uh and specifically with language models looked more like this because i think that this is really fabulous and in the context of language models i'm interested not only in how these patterns in language understanding arise across training but also how would the the patterns across training change as you scale so for example it is conceivable that um you know just the stick with the chest analogy for example it is conceivable that if you were to multiply the size of the neural network by 10 um the some of these for example in check might uh not really change at all you need a certain amount of exposure to data to develop the concept and it's pretty easy to develop once you've seen the data whereas as a proportion for for how early early in training um checking whether there's a mate thread arises could change dramatically i mean you know as the neural network gets larger has it has more capacity and it's able to develop this faster because uh in fewer steps um as you scale up and i think that doing that kind of an analysis would be really fascinating and really valuable to interpretability research unfortunately nobody's really doing that for a couple different reasons including the fact that there aren't good pre-trained sets of models to to do this on so big picture what our project is about is studying this kind of phenomena though more in the context of language uh and that is not one of them um sorry you can't talk and think at the same time that's good take it down uh and so we've we've identified yeah so we want to train a set of models that are consistently trained and consistently checkpointed like what they did in for example the scaling laws papers um but release the trained models as well as all the partially trained checkpoints and kind of use these to power analyses of interpretability i'm sure that they will be useful for things that aren't interpretability research as well um but kind of the the big motivation from my point of view is one of the examinations from my point of view is studying um interpretability um with it yes so we have a we have a couple experiments that we uh so there are two pieces to i guess phase one i'll call it of this project uh one of them is to simply like train this this uh suite of models that can be used to to do these kinds of analyses and then step and then the other part of phase one is to put together a diverse and compelling set of experiments to use to show people look at all the cool stuff that we can learn um and so we've we've gone and talked about a bunch of different kinds of experiments we've identified a couple that we are going to start off with um and igor is uh is running one of them um and he's gonna he's gonna tell you tell us about some of the preliminary results that he's found there and then um i've been doing some stuff with memorization uh which along with um yeah words sorry i'm going to be doing some i have been doing some stuff uh with memorization alongside wars and i am going to be presenting uh that after igor talks about the the really cool probing stuff that he's been doing um yeah so i think that largely covers it you want to take it away sounds great for sure thanks stella can uh can you hear me can everyone see hear me and see my screen we can all right that's uh great um yeah i've talked to some of you in on various channels and threads but i'll just take you know 10 seconds to introduce myself i've i'm from engineering product building background i spent last seven years at pure storage building a data storage product and i'm currently collaborating with sarah hill ventures at uh founding a next startup in in ai primarily with focus on b2b space so i'm spending half of my time looking at the latest and greatest technologies in ai and really trying to you know have have a have my hand on the pulse there and and the other half of my time looking at old crusty enterprise applications and trying to figure out how to how do we pair the two together and and and and built uh an interesting company in in that space and so so as a part of the the ai research part i've been spending some time uh with elusive ai i've really appreciated that i can i can drop in and uh if i'm you know if i'm i'm can join projects and and help out with things i certainly appreciate uh you all having me here and uh tolerating my presence uh so so uh let me uh let me talk about some of the things that that we've looked into as a part of the intro interportability across time research track so um can folks still see my screen i'm not i'm not quite used to this uh maybe i need to switch back to the setup but i'll figure it out so uh in the what when where plots uh can't both still hear me yep okay okay okay all right all right okay so in the wear plots uh one of the things that you need to be able to do is look across layers so you need to be able to probe across depth uh cooking glass yourself please sorry no okay that was what was confusing i was like what's going on uh so so one of the things that you need in order to do to look uh at interportability is to look across depth like what's happening at different layers in the transformer and the first like the most basic question is you know can we extract something out of a given layer in gpt dox or any other language model and so that's that's kind of the first question and for those of you who have been following any of the interpretability discussion you you probably would have seen what it lands or or you would know that like dropping layers uh means that the model still continues to work so um so this tells us that the prediction of the next token is present at lower layers of the of the transformer and most algebras did a very nice thorough examination of this of this concept and we could easily spend an hour just talking about the logic lens but uh in terms of what happens with with the model i kind of tried to illustrate it on on the left side in terms of in in in terms of the components of gpt neox and i'll use this format on subsequent slides as well so so basically what we're looking at here is we we just take the model and we only keep l layers right so we drop the top layers and that gives us a next that that basically ends up producing a model that still works it generates a next token prediction uh and we can you know we can then let's say look at what's the uh what's the perplexity on wiki text or what's the what's the law clause on wiki text and so if you look on the on the graph that i'm showing here here we're looking at perplexity on wiki text as examined by logic ones and we actually started zero so you can you can do logic lens prediction straight out of the input embedding um interestingly it depends whether if the embedding and unembedding are tied like they're in gpt2 it doesn't end up working because because in that case logistics just ends up projecting back the input but on gpt you know on gpt neo x you can you can use logic lines to get the prediction out of every layer which is that's super interesting now the next interesting question is logic let's shows us that the next token prediction is present in earlier layers now one question is like is logic lens the unembedding matrix the optimal projection so if we take the unembedding matrix is it the optimal way to project the next token predictions out of earlier layers or are there is there a better prediction and if there's even a better prediction how much better is it is a little bit better is it way better so that was kind of the question that we started started with that was uh this stella guided me down this general path to go explore and uh what the the way and i mean i've looked at this problem in a few different ways and though the way i approach this is to add an extra linear projection in front of the unembedding layer so basically instead of using the unemitting unembedding matrix to project the output of a given layer we effectively fine-tune the unembedding i've done this through a few different variants i've done this by retraining the final linear matrix i've done it by putting an extra uh linear transformation in front of the uh unembedding matrix and just fine tuning that and you end up getting the same thing in in all the different variations and so you get the second curve the i guess orange dashed curve that shows that actually you can get a much better production out of the residual stream compared to what you observe with logic lens and so this demonstrates that there is a much better prediction uh than what you might think from from logic lens now one of the i think the key part of the i guess thought process i'll be going through is um okay is the orange curve like overfitting in some way or is it really demonstrating to what extent it's really demonstrating knowledge that is present in that layer like to what extent is it fair to look at the orange lake orange line and say that hey this is what the the model knows at a particular layer um and so this will be something that will get you both from like empirical and theoretical side um but one thing that's a quick question do you have any notion of the variability for each of those points so like how much variance is around the blue points as opposed to the orange points variability with respect to just the performance overall so like how sure the average performance is better for orange than blue but is there just a lot of variability at different layers or does the variability change or so in order to understand kind of the yeah so so the or the orange line you know can be obtained in various uh in in through various methodologies i've tested this across various model sizes i've just tested across various um evaluation sets so here here i've trained the extra linear layer on pile and i'm evaluating on wiki text i've looked at evaluating enron emails i've looked at evaluating at other other data sets both in both that uh measure perplexity and also that measure other scores like lambda and and so so forth so the general results that i'm showing i've validated in by varying a number of different axes yeah i actually didn't mean that as a skeptical question it's more trying to understand more just kind of what the distributions of those points look like so that maybe let's say that for ninety percent of the words you get a big difference in these things but um for ten percent of the words the the performance is just as bad as i don't know random guessing or something for most layers you know you could see different characters of different behaviors at different layers if you looked at variability yeah so what i'll have later on is i'll have a i'll have some i'll have an example so we can look at a concrete example of what's a prediction that actually is generated by a logic lens versus the uh versus the extra linear layer so so let's get to that and then if it's see if it's helpful and again we can uh you know take it from there uh but so so one point that i did want to make is uh if you look at i mean the difference between logic lens and the and the fine-tuned and the tuned lens is pretty huge like it's it's something like um you know eight layers in in the most in from like layer zero to layer eight and so uh sure you you have to believe that what the true knowledge of the model is to the prediction that such such a thing exists has to be much closer to the extra linear line compared to the logic ones because we've really only added uh one matrix that's like a million parameters you know and then embedding space squared so we've added relatively few parameters and we got i got a prediction that's so much better so even the most pessimistic interpretation like you would have to imagine the true knowledge of the of the transformer is maybe like a quarter of the layer to the right of what the orange layer line shows but i it's certainly it's certainly quite clear that um extra linear the extra linear curve is a better representation of what a given layer knows about a particular prediction and so we'll get to that point from from different different perspectives and angles one one thing i want to type in at this point about is that there's a question as to whether or not tuning or you know changing up the methodology or or doing some kind of fine-tuning or doing some kind of prompting is a is cheating or not um and kind of the approach that we've had in general is that there is knowledge that is inside the neural network and we're not necessarily trying to measure what exactly is going to happen if you query the network um so much as how much information or or knowledge of what facts can we extract from the network in some way uh which which i think is pretty meaningfully important because one of the main things that we're interested in with these with these networks is using them as a component in some kind of pipeline that is going to know a lot about uh you know about the world um yeah so absolutely so like to what extent are we cheating by drawing the orange line i think it's the that's the key question and and then i think that's like the the main thing that the rest of my uh slides really come back to from various perspectives so i think that that's that's a very interesting question and and that's one that i'll get to from different perspectives uh i i also have here results from from lambada uh comparing logic lens and extra linear uh extra linear fine tuned projection again you see the tuned projection doing substantially better like quite a few layers earlier um yeah i've looked at lots of different evaluation sets one of the things that is nice about this methodology is you can apply any benchmark from let's say an eval harness and you can get the scores at any given layer so um that's that's pretty cool um some of the benchmarks are noisier than others and so sometimes the uh sometimes it takes some time to understand and interpret what's going on uh so uh so let's get to another another question tuned lens the the way i introduce is it's reading from the residual stream we're projecting from the residual stream into into a prediction space so so what if we also tune all projections that right into the residual stream and what i mean by that is all the linear projections that precede the the residual addition so so let's look at kind of which parameters i'm talking about so here what i what here what i tried was i tried to tune the extra linear matrix uh and then i also tuned attention dense and mlp the the output matrix of the the multi-layer perceptron so basically we're tuning like this is like 40 of parameters in each layer and so now question is how much how is the curve going to look like is it going to be how much better is it going to be than the tuned projection that we tuned on the reading from the residual stream uh and the thing that was super interesting and i would say i think unexpected in a lot of ways is actually tuning these parameters does not get you any benefit at all so the way i'm interpreting that is is the model already writes optimally into the residual stream so ultimately all the matrices like the attention dance and mlp dance for h2h those are already optimally tuned and as long as you know how to read from the residual stream there is no extra benefit from from tuning those those matrices um and this is super interesting because now here i've tuned way more parameters than in in the previous than with a tuned lens with two lens i only like tuned one square matrix in the embedding space here i tuned like 40 percent of uh parameters in each layer and uh and there's no additional improvements i think that's that's super interesting to think about this and try to incorporate that into the mental model of how the residual stream behaves and so the the next question is like okay what if we tune all the parameters and i think that if i tune all the parameters i didn't get an improvement then at least something would run with the methodology because i think everybody expects that if you if you drop the model with a certain number of layers and then you retune it you should get a better performance so specifically with the tuned lens we see roughly a linear behavior across layers so the improvement in log loss is roughly linear across across layers um and you would expect something you would expect improvement that's you know behaves exponentially with respect to the layers so i i did this you know to do this experiment where i retuned the entire cropped model and we do get a curve kind of more along the lines of what you would expect these are more more finicky models to train so you kind of need to use different learning rates for different sizes and i i didn't quite do enough hyper parameter optimization in the middle so that's why that's why the curve is not as nicely rounded as you would expect it's a little buckled because i only use two different learning rates one optimized for the left and the other options for the right hand but you so you get this curve that represents how good can a model get in let's say six layers and so so you can look at how big can a model get in six layers versus if you have a 12 layer model what how how good of a prediction can you get out of the sixth sixth player which is a pretty pretty interesting comparison does this make sense so far any i get any questions at this point uh we can look at we can look at other benchmarks or comparing these curves i i won't spend too much time on that part um and so so i think there are three i have one question actually just on something you had a couple slides ago uh like it sounded like um the the model is already optimized for writing into residual streams uh but you know humans want to read in an interpretable way and it sounds like the model doesn't care about that part until maybe the very end is that is that what you're suggesting right exactly so the model already writes optimally into the residual stream but it's only i mean the the prediction well the the unembedding matrix is only optimized for interpreting the output of the final layer and so i guess i'm making the case that if you want to read from the earlier layer you need to read from the residual stream i'm making the case that that you tune this this unembedding matrix for that layer and that's the way you read from the from the residual stream so that's sort of how i'm thinking about it oh um oh i would so i would also add that um so when we're talking about tuning linear layers and if the question that you're asking is one of the inspiration one of the inspirations for doing that namely that a linear layer represents a change of basis so if you think about each layer of the network outputting into a slightly different vector space um and so you you have information that's being written into the result residual stream and as you go down the neural network some of that information is getting modified or changed but uh another significant portion of what's going on as you move deeper into the network is it's just changing how it's using to express that information um and so but kind of the motivation behind specifically looking at tuning a linear layer at the end is that we are learning a mapping from the intermediate state that the model is in after six layers to hopefully a vector space that is more aligned with the embedding space sorry an embedding space that is more aligned with the way the model is representing information at the end um and therefore ideally something that's going to be more compatible with like the the unembedding layer yeah i think that's just really well explained so so if you believe that the embedding after every layer contains a subspace that has the next token prediction and the subspaces can be different across players then it's a logical consequence that like okay how do we read from the subspace well we tune a linear projection to find that change of basis operation that's going to give us basically that prediction out of that particular layer because we've demonstrated that the basis is not the same across cross layers i think you could also turn that the other way though right because if you think of a linear model from embedded tokens alone this is something that was brought up in the last circuit's work that's that's basically modeling bi-gram statistics right you just go from the last token and give me a linear prediction of what the next token is if i gave you the parameters of that model it's not like you'd necessarily be saying that you're just reading out what the prediction was at the input because i mean you get some information from those parameters right you're still you learn something from getting from training those parameters it's not like you're just reading out the embedding space at that point there is some predictive extra stuff going into that as well so i think that is an important thought and and i do i mean i aim to address that in in future slides but absolutely so so linear uh linear linear transformation is is not is not nothing and you do have some parameters um i guess i'll i'll just make one comment here but really address it much more thoroughly later at least to the extent i have time because i don't think too much time here but uh you know even if even if the transform even if the prediction was encoded in like specific bits of the embedding like even if it was there the model would still have to know to extract it from those bits and they would still have to do do do do something but i'll quite a bit more thoroughly in in the next couple of slides so so let me talk through that and then you tell me if it was helpful um all right so so um i mean ultimately we end up with these three curves that i think are super super interesting if the logic lens the tuned lens and then the retrained model curve which is you know roughly um logarithmic and and so we can look at a qualitative comparison between these different methodologies i picked sort of an arbitrary prompt that was meant to test knowledge and grammar a little bit uh so it says at its peak the roman ire and if you and i looked at layer six prediction out of a 12-layer model um the logit lens kind of generated something so it's a roman empire's largest no reality tv based off duty it's kind of like some phrases there but not not great tuned lens generates something substantially better so edit speak the roman empire was the most popular in the world but this is the only thing we have seen it's a and then it kind of goes off the track a little bit and then the fully retrained model is clearly the best one uh and so so qualitatively you can see that it's consistent with you know logic lens not doing so great full retraining being being best and the tuned lens setting some sort of in the middle between the two sorry you say that the full retraining is the best qualitatively here because it still follows grammar like the content is still kind of strange for all of these right but now you know we are trained early tiny models here we are training six player models that's really small i mean to me at least the second one reads better i guess he gets into a subjective judgment i mean the the middle one kind of goes off track with the it's a big guy out the other way yeah i think at least i'm just trying to articulate what carries my qualitative impression i think it's just that the last one follows grammar while the other ones just kind of go into some kind it follows grammar but it also follows like it's talking about ruling which is you know it's tied to the concept of of an empire so i mean it's not giving you encyclopedic knowledge but it's sort of saying things that reflect some understanding of the general topic even though as far as to say that the for example the logit lens output looks a lot more like a sequence of somewhat randomly generated tokens um you know maybe maybe you sampled from from a giant text and you pulled out six tokens you wrote them down and then you sampled from the text again and you pulled down another six tokens and break them down that's that's the kind of process that i would expect to give a sentence like what the what the logic lens outputs um so [Applause] i did not touch that question i think we have someone that should try to mute [Applause] experiment that i was curious about is logic lens is effectively your projection is effectively how unembedding matrix would perform on the earlier layers so you can generalize this concept and look at how does a tuned lens on each layer perform on all the other layers and so i did this experiment and on the same small model you can examine how do lenses trained on one um on on one layer perform across other layers and so um i i don't necessarily have like a big fundamental takeaway from from this observation from from this this graph but i think it's very interesting that you can see how the uh basically how the space changes across layers so and when you tune a layer when you tune the lens on one layer it does well on that layer and maybe does okay on the on the layers next to it but it does consider but then that's progressively worse on layers that are for further away sorry i'm having trouble parties in this plot so you the task itself is to not just train to the output but to train to the representation of the other layers yeah yeah so so let me talk through it one more time or let me talk to it in more detail and more hopefully more clearly so previously what i show then we can go back to to this picture the middle line that we're looking here at here is we've for each layer we tuned a lens for that particular layer so like the the on the on the red line at layer six we've tuned uh a a change of basis matrix effectively to transform from the output of the sixth layer into the output space so we've tuned the lens to get the prediction out of the sixth layer but now we're asking the question okay if i take this interjection he meant to say green not not red yes i need to say yeah i meant to say green the the red line is is is is irrelevant i guess this particular part of discussion yep okay i think i get it so you're you're using the lens trained on let's say layer six throughout the network and seeing how it performs on all the other layers because that's exactly what i would argue with the one lens is taking the projection that was trained on layer 12 and we're applying it across all the all layers right and so i'm going to generalizing from from that and so then you get this so so the pink line in in this plot is the logic lens because pink line is the uh is the projection that was trained on the output of layer 12 and so it does best on layer 12. it does okay on layer 11 and then it does really poorly on early layers so it it this is sort of an illustration of how the face changes across layers this is this is the prediction place if i can provide a quick caption to this to this image because i know there's a lot of lines a lot going on so in the in the key where it says like lens five um that's referring to a basically what's happening in the logic lens but instead of being tuned to the output layer it's being tuned to the the representations in layer five and then so that's that's the green curve and so as the green curve goes through the plot we're seeing what the performance is of that lens that's been tuned on layer five when it gets applied to the uh to the outputs of other layers and the reason why it's these curves all hit the uh the black line is because mathematically uh tuning tuning an extra linear layer is uh yeah the tuning an extra linear layer uh at level f at layer five and then using it to interpret layer five outputs is the is the same thing that we're talking about um so it's it's like a mathematical topology that unless unless something's deeply broken about your optim about your optimizer it should be the case that all of these curves hit precisely the values on the black line and so the kind of really interesting thing is the the way that they bounce off of the black line as well as the shape of the curve before it so you can see kind of how how what what happens when you when you miss a line the uh the interpretability lens the with the residuals and how much that hurts performance exactly yeah thank you thank you for for explaining that uh you're absolutely right that the i mean the lines hit the black curve sort of by definition i i mean i use the same run to get the id because otherwise i have to just run the same thing twice it's like really is that is this tautologically the same thing as you said uh but but it is the interesting thing is how the colored lines bounce off over look from the black line and and perhaps the fact that you end up with you know something that looks like a line at the the bounce points something really fascinating about this plot personally is that they is that if you look at layer 12 they come out in the right order um so the first is the one tuned for layer zero then layer one then layer two and layers like four through seven all get kind of close to each other but they are actually coming if you read them top to bottom um in terms of their performance they are in order zero one two three four five six seven all the way down to twelve um they're coming out in in monotonic decreasing order um or a little more a little more clearly um if you if you tune a lens on layer k and then use it to interpret the output of layer 12 which is the last layer in this network because it's a 125 million parameter model then performance is the performance of that lens is is a monotonic decreasing in the distance between the layer that you tuned to and the layer actually evaluating on and that the fact that the difference there is what's determining performance is true almost everywhere in the plot there's like a couple areas where it gets a little messy and the lines crossed um but almost everywhere in the plot the the main thing that looks like it's determining the result is or the the loss in performance is the actual distance away in terms of counting layers that you are yeah that's super interesting that's a really nice observation and by the way i would bet that some of the places where the lines cross are because i i missed the missed uh run so like in x equals one the green line is crossing and now i'm looking at it i bet that i missed the point there i didn't have a measurement uh so but you know leaving that aside i think it's it's very close to you know very little line crossing given how many lines there are uh we can look at to look at a slightly larger model this is this is a 24 layer model and in the same general behavior it follows um the the lens train only on the input embedding behaves a little differently from the previous model from the smaller model but you know i'm sure that'll be interesting to to try to figure out why the why it sort of drops down for the final layer and how the final layer is a little bit more similar to the first layer the layers in between uh but but you know it does show you basically how this uh output faces out of the the prediction in layer outputs changes in sort of this mostly gradual way which is uh that's pretty interesting to you know to me uh on the leveling off uh afterwards i wonder if that's because the knowledge is simply retained in later layers i would assume that if it were to forget the knowledge then you'd see a decrease yeah so i think that when when a line bounces off there are two factors that are at play one like the later later layers no more but then the lens can't really take advantage of that and and the representation drifts so the uh so the lens gets there less effective in in in later their layers so i think okay it looks like we were internet did you lose me i hopefully okay all right i'm back here so let me wrap up with a few thoughts and i'll try to wrap this quickly so that we don't run over time too much and so i guess the first observation is that the next token prediction subspaces vary across layer and they vary sort of gradually and so you know we have basically output prediction encoded in the output of each layer but exactly in which subspace it is seems to change gradually the second thought is that we're using logic probing as a as a technique meaning that we are generating logits out of a given layer and this is something that enables us to run various language model tasks to evaluate performance at that layer um which is it's not something i've seen in in the literature you guys know a lot of the research better than me so but i think this is worth i think this is worth thinking about and uh specifically we recommend the tuned lens as the probing method uh and now i'll get to the i think the final thought which you know gets back to the question from earlier to what extent does the tuned lens reflect the true knowledge in in in a layer and um the the argument that i am making is the way to think about the knowledge in a given layer is how accessible that knowledge is to future layers now how do future layers access well they always access knowledge through some kind of a linear transform there's always a change of basis you either have the kqv matrices or you have the mlp pre-activation matrix that reads from the linear from the residual stream the future layers always read from the residual stream by a linear transform so they always access some subspace of the individual stream and so so the the claim is that accessing any subspace of residual stream is something that's free to future layers of transform and so because of that the claim is that that's why you can assume that the knowledge is known in that layer it is already encoded in a way that's effectively effectively free for the future layers to access and so i i tried to i tried to formalize that and i guess apparently i didn't paste that correctly uh so let me just uh let me just fix that really quickly um okay this is what i wanted to show can i mention that it's it is nice to see a live person on video but i think that might be hampering your connection a bit all right i'm trying it off it does seem a little smoother at this point but let's let's see how it goes thank you for for working with us so this is just my last last slide uh and i you know i tried to kind of formalize this statement i kind of just look at what i've seen empirically and try to try to look at the theoretical response and so when you look at also so let's look at for example how keys are computed in an attention head well you take the embedding from earlier layers hi and you multiply it by the key matrix for that attention head and now let's say that we had a lens that was tuned at that layer well then you can express the calculation of that then then let's look at the calculation of a key as an arbitrary linear transform of both h i and h i times the lens so basically we're saying like the key calculation accesses both the embedding and also the output of the tuned lens and basically just and when you go through the math you end up just with a different key matrix so you end up with a different key you know 6k ij but it's still linear right it's it's the as long as the model learns the appropriate it can attend both to the embedding and also it can attend to whatever the whatever the tuned lens transformation is and so so this to me is like a very interesting thing that uh the model does not need future layers don't need any additional computational capacity in order to access this information that's in this subspace the model still needs to dedicate a subspace of the keys to represent whatever information it's extracting out of the lens but that's sort of a logically necessary requirement even if we were extracting the information out of the key vector through identity through the even if we're extracting the information out of the hidden um latent state through identity that information would still have to fit into the key vector so so to me this this is the argument that i would make that that the tuned lens represents information that the future layers can access for free and so that's why i would say this is something that generalizes across transformers with nlp computer vision or whatever else you're using transformers for um so i'd be super interested to get people's thoughts on this uh get feedback and if you have ideas on how we can run experiments to either prove or or disprove this way of thinking about things i'm very interested to hear what you think yeah thank you so much for for walking through this stuff this is really great um regarding the slide i don't think i've necessarily parsed all the details yet but i don't necessarily think that thinking of it as thinking as a of a change of basis is for free still makes sense to me i think um just the the even the bygram model consideration that we were talking about before still kind of weighs on my mind because it's not no information that's encoded in figuring out what you need in order to understand what the next token is touch on that point the way i think about the the diagram model is that basically diagrams are hopeful that that the diagram knowledge can be integrated into whatever follows for free because after the way you read from uh from the digital stream is through a little transform and taking multiple transforms single linear transform and so basically the the diagram information can be encoded into whatever linear transform follows for for free and so that that's the argument i think i i think i can see that part but i think when you say for free you mean a specific version of for free like it's for free in the sense that it's free during once you've actually done the training but in order to figure out what the parameters are that you need in order to figure that out is still not no information anymore right and so i i think that there's i think that there's two slightly different concepts going on here um when when when when igor says that the information is available for free this this is is a different this is not the same claim as saying that tuning the linear layer doesn't provide any additional capacity to the pre-trained model like we we're talking about something slightly different here what we're what we're asking here is does is as the model trains right it gets so okay so as the model trains in the lower layers it has direct access to the inputs it has like the attentions that come out of that and it also has the intermediary computational results from the previous layers and so one question we can ask is as the model trains does it need to do additional work to extract information that it has figured out in previous layers if the answer is yes that means that there's that could have a couple different consequences one of which is that there there may be some way to to optimize the setup so that doesn't need to do do additional work to to recompute things that's already figured out um but what what what you are is is showing here is that mathematically speaking there does not seem to be i mean setting aside what it would mean to demonstrate this empirically because i think that like nobody has demonstrated something like this empirically ever for any neural network uh but mathematically speaking there is no particular reason to think that as the neural network trains layer 17 has to do any work to to access information from the previous layers which which is which is definitely not the same thing as saying that if we stick a a linear layer at the end and tune it and use it to extract information we're not adding information it's it's it's a similarly structured but but very very different claim does that answer what you're looking for that helps i think i need to think about that more to parse the differences yeah so yeah so the main the main question in this slide is kind of how how how accessible is previously determined information to layer layers of the network because we've been talking this whole time about how they express things in different bases and how you know we have to tune additional parameters and add them into the network in order to get human readable outputs that that make sense to interpret prior layers because the because the the unembedded matrix is only tuned to the output of the last layer and kind of the the argument here is that even though we as humans have to do that in order to get information out of the previous layers the the neural network mathematically speaking is already doing that work when it's being trained so that we we shouldn't think that it is uh particularly cumbersome for layer 17 to consult the the outputs from previous layers um and i think one reason why i think that it's important to think about this is that the way that like the the circuits work especially talks about the residual stream and writing into the residual stream kind of assumes that there is this body of knowledge that you start off with and then layer one modifies the body of knowledge and layer two modifies the body of knowledge and then layer three modifies the body of knowledge and if we make the claim that the input and output representation spaces are meaningfully different and that this makes them hard for us to to actually get data out of when we move between layers it suddenly calls into question whether or not you actually are are writing into the same residual stream or whether it makes sense to even sorry whether it makes sense to think about just writing into the same residual stream or perhaps writing into the residual scene in the same language because if layer one is writing with the layer one uh output embeddings in layer 17 is writing with layer 17 output embeddings we just showed you a plot that says the layer 1 output embeddings when applied to layer 18 do not give good results and so there's a important question here of is layer 18 actually able to read the stuff that layer one is outputting and so that's kind of the the context and stakes to to this slide in particular as well as kind of how it relates to what we were saying before okay i think that makes more sense yeah thank you i'll uh i'll ramp up here thanks for your attention i'll be super interested i will share the slides so if you want to kind of look at this uh stuff in more detail um it will be available and i'll be super interested to hear you know what folks think and uh if there are specific experiments or or you know thought experiments that we could go through to either either get either support or or reject this way of framing the problem and thinking about it thank you i'll head over back to stella hello can you guys hear myself yep cool i gotta pop saying the stream ended and so it's just a little worried uh so yeah so that was a lot of empirical results igor's been working on this for for oh like i guess two months or something like that now um that is is most of the empirical results that we have we're still trying to kind of figure out what we think the right way to interpret and think about these results are but we also have a couple separate um empirical results i kind of want to go over briefly that orza and i came up with um this is actually stuff that we largely did prior to deciding to to do this scaling model um but we weren't really sure what the best way to package it and and present with the world would be and then once i said you know we should we should start training the scaling suite and doing a bunch of interpretability experiments with it it became a pretty natural place to present this information so the thing that that we've been looking at is memorization in neural networks and something i observe just talking to people is that a lot of people oh let me share my screen now uh yeah so so here's the punch line the order of training doesn't affect whether or not a data point is memorized um so something i've observed talking to people is a lot of people have a kind of a mental model but the reason why memorization in language models and neural networks in general happens is something along the lines of the model starts off as the model trains it has a body of knowledge or or a set of representations that it has developed that explain things that have happened before and some new information it encounters during training is readily explained by this uh what's been developed so far and other things that encounters during training are not and you know if we're going to get information theoretic about it some new sentences that it encounters that are extremely different from some instances as ever seen before require more bits to specify in an information theoretic sense given the the the information the model has been exposed to so far and it can be a lot more efficient to simply memorize the that information rather than do all the work it takes to kind of re-build the uh latent representation space to have a particularly whoops that was around to have a particularly efficient way to represent that information and so kind of one piece of empirical evidence oh sorry it looks like that looks like you're hitting the slideshow button has probably changed the window setting or something but the stream has now closed oh wonderful yeah i just wanted to make it bigger thank you that looks great thanks um and so one piece of empirical evidence um this is this to be clear is not something that i'm claiming any particular one any particular researcher has published or something like that but this is seems to be a mental model that a lot of people have um that i've observed just from talking to them and before we started doing this this was my mental model um but i think a lot of what we've done is kind of calls that into a significant question but there is good computational evidence of this so for example um if you go look at what content tends to get memorized it tends to be stuff that first of all doesn't show up uh that is structured extremely differently from stuff that shows up elsewhere in the model so for example if you take a large quantity of internet text and you insert like a handful of 15 digit sequences of random numbers uh those 15 digit sequences are much more likely to get memorized than some kind of you know baseline sample from your training data um and carlini especially has done a lot of empirical experiments that have shown various things like this and kind of the conclusion that i have reached from reading a bunch of his papers and something that i know that he believes because i've spoken to him about the sponge is that kind of the the two big drivers of memorization of determining whether or not a particular data point is memorized is first of all how frequently the model gets exposed to it because especially with these large language models um once you've seen it twice you're already over fitting and so as you increase the number of times you see the same string of text um when you're training as you know a six or 10 or 20 billion perimeter model uh that has a really big influence on it because the model overvalues stuff it's seen before by a lot and then the other one is just like you know how different from everything else is the stuff that you are memorizing um and that's that's certainly not like the only uh factors uh we ex uh exploratory analysis has shown that for example some book passages are memorized and you know why does gtb2 have the first like two chapters of the second harry potter book memorized but not the third harry potter book memorized that doesn't make a whole lot of sense and if you go and you look they show up in there's definitely some other things and possibly just like some random noise going on but a lot of but from the qualitative analysis that exists it seems like a lot of the explanation of memorization boils down to those two things uh first of all exposure and second of all how difficult it is to predict a a bit sequence and so when we're thinking about how things change up for training the kind of natural assumption is that the order that data is presented in the process of training matters if we have this picture of the way matters specifically to memorization if we have a picture of the way that neural networks develop these kinds of representations that says it has a model of the world and it's incorporating data and some of that data is easy to incorporate and other data isn't but as it kind of goes on it's continuously trying to optimize the representation that allows it to explain the data that's seen um things that occur earlier in training are less likely to be memorized because they have had more time they've spent more time sitting in the set of things the model is trying to optimize its representations for and because the the model has has used it to has used that data as its prior to try to understand and represent other things that see um and kind of the i guess i talked to the first several slides um kind of the conclusion that we have reached through our experimentation is that this is just not true um specifically if you take a uh a sequence of of characters and you move it somewhere else in the training data whether or not that sequence of characters gets memorized like the probability of it getting memorized does not change um so the way that we've we've gone about doing this experiment is we've taken uh we we take the the contacts and we split into two groups we take a document from the training data we say okay we're going to take the first n tokens um i think for these experiments n equals 32 and we feed so we take it we take an actual training document and we take the first 32 characters and we feed it into the model and we ask it to predict uh the next 32 characters and we're not actually interested in the like the top one predictions or something but what we're interested in is the logits assigned to the true continuation so you have a six you have a 64 character sequence and you when prompted with the first 32 how likely is the model to what kind of probability does the model assign to the correct second half of that sequence and um well yes uh so for a dense small model uh there's a typo here this is 125 milli not 150 but not really particularly important um so this this is a bucketed histogram where the x-axis shows the index of the document sorry of the tokens in training um because documents have different links the fact that uh i the fact that these are token indices not document indices is important um and what you what you see and so what this shows is a is a bucketed histogram um i think there are 20 buckets here um and this is the uh 75 range um as well as the lines between the medians and if you look at the the y-axis um this is a really small variation we're going from from negative 7.14 to negative 7.19 um as the total variation across uh you know 3 times 10 to the 11 tokens in train um and indeed if we look at the the fit regr the reg fit regression line um for predicting uh negative log loss as a function of the token index the there's this y-intercept term and the slope is 2 times 10 to the negative 15. that that's about as strong of a statement as absolutely no correlation as you're ever going to see and then we did it with a 350 million and we saw the exact same thing you'll notice that it shifted downwards um we're now at like negative nine um larger models tend to memorize less which is another kind of implicit point in favor of the idea that that representation capacity and and having good representations is something that influences memorization but the slope here is still 3 times 10 to the negative 14. and if we go up to a 750 million parameter model um we've dropped even further we're now at negative 24. but we're still at 2 times 10 to the negative 14. um and then you can see here is a plot of the regression line so you can kind of see how they're spaced out um so first of all it's it's fascinating to me that the difference between going from like 150 million parameters to 350 million parameters versus 350 million parameters to 750 million parameters is ginormous because uh and actually goes in the opposite direction that scaling laws would tell you to predict in general scaling law says that if you plot things on a log log plot um performance increases linearly with scale uh this is not a loglog plot however uh i've done the math and i can tell you that if i if we were to replot this analog log plot uh the green line is closer i mean you can see this kind of numerically like the the green line is is closer to the blue line on a log log plot then it's the red line um in terms of the perimeter space uh sorry make sure i'm saying the right thing um i i do but i want to get more data points and this was kind of one of the reasons why i said okay we should just train a lot of models um because at the end of the day this is three data points and the hyper parameters for these models were not picked perfectly and they were not picked incredibly consistently um so these were three pre-trained models that i had lying around because i was testing the gtb neox code base um one thing that i that i really want to do and one one thing that's part of the this idea of training a scale a scaling suite that i think is really important is using uh really precisely chosen values for all the hyper parameters and making sure that you are trending on the exact same data in the exact same order that you are you know that all of your parameters are scaling linearly on a log log plot um that you you are following kind of the skill the scaling laws best practices let's say uh how's the horizontality of these lines related to your claim that the order of training doesn't matter okay great question so if if earlier tokens if earlier sequences are more likely to be or less likely to be memorized exactly than later in counted sequences we would expect to see that the the negative log like the negative log likelihood that's a tough word to say of latter sequences is larger we would expect to see a positive correlation here but the x axis is not the sequence index but the token index yes entirely so to the token index hurry can you say that again the sequence index is entirely orthogonal to the token index right you have a bunch of files and uh each of those files can be imagined as a line and now you're looking at how the performance varies as you go along the line rather than as you go through the lines oh sorry i thought no worries so the the the question i have here that i think that this plot is a negative result for is is it okay we have the pile it is uh 1.2 terabytes of text it has been tokenized it has been shuffled it has we've decided that we're going to train for one and a quarter epochs um is what i believe these models have been trained for great so we we have 300 and we have uh we have you know three times 10 to the 11 tokens and we're going to feed them in one by one auto aggressively and start training them and the question is is if you take a span of tokens that is in the first 10 of that 3 times 10 to 11 set of tokens and you take a span of tokens that is in the last ten percent of that three times 10 to 11 span of tokens i see i'm sorry i'm more likely to be totally i thought this you got it okay i thought the token in the like i thought you projected the entire training set onto the relative position within a file but what this is you have concatenated all the that's right yeah files we're concatenating all the files um so the reason why no worries the reason why i was speaking earlier about um you know the first however many tokens without how this this is honestly a really good point of confusion because this is something i skimmed over that is intellectually important and can get confusing the reason why i we looked only at the first 64 tokens within any particular file is because files have widely different lengths and we wanted to make sure that the points that we were examining were statistically independent as as best that we could so if i have a very very very large file um so so we didn't we didn't do literally every token um because that would have taken forever we did a lot of tokens i don't know if worse or maybe slides not me um i don't know if he did not seem to have written down the number i don't remember was last time i had um but we we we didn't look at literally every every token we took a sample and what we want is that the the samples are statistically independent in the sense that um sorry that the we want that we would like to control for as much uh potential correlation between these tokens as we can and one of probably the most important sources of correlation is that if you have a very large document and you train on on all of it but you sample multiple times so you sample from say the first 64 tokens in the in this textbook and then you sample from another set of 64 tokens that are like 100 000 tokens later in the textbook the fact that they're coming from the same data source and show up consecutively uh and it can introduce a statistical bias into it and it seems very plausible to me that the probability of measurizing a passage later in a in a text file is dip is like probabilistically dependent upon the random variable that says did i memorize earlier parts of this text so oh it's just gonna say they don't have crazy like i don't have particularly strong evidence of this it seemed like it was likely um and it seemed like the easiest way to control for this was to simply make sure that we were never sampling from the same document twice every 30 seconds sequence or every 30 second record of this was actually taken i mean evaluating oh this is this is every single sequence of tokens as in uh the pile if you consider it to be a set of records each having 2048 tokens and every 30 second record of it was taken for evaluation not oh okay i thought so tokens one through uh yeah i think the token sequence thing is maybe different but you're just saying that that files were in some sense sampled uniformly across the length of the documents right yes yes that is that is that is the the moral of the story i'm trying to get to so i i think i have mixed up some of the details that was the goal was the training data shuffled before training uh the training data was shuffled before training and is in the same order for all three models so it was shuffled once and then each model was trained on it separately so one thing i i looking at these numbers and looking how flat things are i actually find these i find this very very hard to believe actually um how flat uh the numbers are both in terms of like the scale and the uh and and just the fact that there's almost that we're seeing essentially almost no effect um across time i mean you show it yourself with the regression line it's basically you know it's it's 10 to the well it's basically nothing yeah no 2.25 times 17 out of 14 is actually stored as zero in in many contexts yeah i mean to be fair the token index is is is goes up to three to the power of 11 but i i i get what you're saying is it's it's a very it's uh it's almost completely insignificant um and again i find this i find this incredibly difficult to believe um one thing i i'm curious about um is how does the loss look like oh actually these are only these are only up to 750 million parameters i guess um and you and the data set we're dealing with is how big uh 3.5 times 10 to 11 tokens um right i'm sorry i'm really bad at these conversions uh i mean this this is like full gtp 3 training data size it's the full pi it's the full pile um more than that yes um and and we're saying the bigger numbers generalize well the other the other thing i guess that's weird is the is the is the fact that the magnitudes here for the different models are are different so the the the kind of base rate the negative on average is is different though the extent of the variation is actually pretty similar so here the extent of the variation is .05 um here it's point zero five here is point zero six um and you know that's that's definitely a very rough guesstimate this is literally like matplotlib choosing what scale to use to fit to the image um but you know 0.05.05.06 is the tick labels that matt went to put on these so the the inter the variation within a particular model is not really changing very much as you scale despite the fact that the actual overall value is changing dramatically and so one one one consequence of that is that if you express this as a percentage you know what percentage of the negative log i don't know particularly if percentages of negative log loss is a very meaningful thing to look at let's pretend for a second it is if you're asking yourself what is the percent variation in the negative log loss um for then small okay and you ask yourself okay what's the percent variation for the large model those are like in order of magnitude different from each other uh one when expressed as percent change despite the fact that the actual variation is staying the same because this is more than doubling how far away it is um so what and oh i was going to say the the the negative log so we're actually we're seeing the bigger just to double check again to make sure i understand what's going on here we're seeing that the the bigger models are actually doing worse so they're assigning less probability maps to the uh to the previously seen strings than before is that correct the the larger models are doing worse at the game of given 32 tokens predict what the next 32 tokens are in the training data so in terms of kind of the capacities that we would like language models to have i think most people would say that it's doing better that we don't want it to be able to predict the next 32 tokens um from the previous 32 tokens because that would reflect too strong of a of a understanding of the distribution of the particular training text that's seen notably this is not a skill humans have if i was to show you 32 consecutive characters you would not be able to tell me what the next third you know if i write down 64 characters and i show you the first 32 of them you are not going to be able to guess the next 32 characters in my sentence correctly the overwhelming majority of the time and that's because language just has a lot of variation there are tons and tons and tons and tons of correct continuations in the sense that like they form very reasonable syntactically and semantically valid sentences that i could have written down in order for you to guess the next 32 tokens gradually you need to get extremely lucky sorry or or i need to be doing something trite you know if you're putting shakespeare or something you'll be pretty good at it but in general what what is the y-axis here just want to double check the the negative log loss of are you sure it's negatively not log because like it's negative yeah yeah that was just the other thing i was going to ask um yeah so that would be then it would be the other way around then like then small would be like the best memorizing which seems wrong no that's what i'm saying yeah the the small model is the best at memorizing i think that that is not unreasonable i think that as the model gets larger it develops um more or so actually let me rephrase that um it's not saying i'm not saying that the the small model is the best at memorizing in the sense that like there are long passages for example that's memorized i'm saying that the the average extent to which it memorizes a particular sequence of token uh decreases um but now i i i do not know what the difference between those two sentences are like there's a very reasonable question as to whether or not those are the same thing or not but um yes this is saying that the the the probable the likelihood that the model assigns to the correct continuation is going down as the model gets larger which is i just want to add that um this is like the opposite of what i saw i i've seen with the open ai models like comparing babbage and curie and davinci uh like davinci has memorized things to a much greater extent than the other models so i wonder if this changes as the models become bigger like maybe we're before the like double descent point or something but i'm not sure i'm really interested to see how this changes this or how this looks as you do the scaling sleep i'm just going to say i'm confused actually like this i am not expecting the the data that i'm looking at right now i if i'm interpreting it correctly i i'm not expecting it um frankly um i i i would expect that the bigger models would be able to get better log likelihoods or assign higher probability mass to the target data regardless of whether they memorized or not because they should be able to leverage um you know patterns that they learned and stuff like that to assign higher probability mass to the next 32 tokens um so we figured the bigger ones do better so we've got one three uh i'm just going to try and project what it you know we talk about large language models as chameleons they're capable of being many people at once many things at once and maybe they are all memorizing but in order to get the memorization you need to prompt them so a larger model is memorizing even more but to figure that out yeah that is that is uh definitely i think a very reasonable um thing uh these these are these results are are pretty solid and since like we've we've replicated them um on this on models of this scale but i think that they're very preliminary if you want to draw conclusions about you know g2 3 or gp neox or anything that's not like in this range of models um and i mean i yeah i i think we absolutely need to look at more models across more scales that we need to make sure the hyper parameters are are tuned to be as consistent as possible and we need to do very things like very how many tokens of input prompting we're giving it and how many tokens of of output prediction we're expecting to have so yeah hypothetically it seemed it could be possible that uh a model is always able to get the next like a specific model is always able to get the next 10 tokens but is never able to get the next 20 tickets but like it actually assigns a probability of 0 to the correct 20 token continuation but it assigns a very high probability to the x10 token continuation and that that is a axis of variation that we have not really experimented with at all um the best i can say is that this is definitely not tuned to give a negative result we picked a thing that was primarily driven by computational limitations because this is very expensive and we happen to get a very consistent negative result but we should absolutely vary those things to make sure that we aren't accidentally cherry picking for a negative result i observe that these lines are so flat that you could have gotten like at the flatness by doing a lot of less comp computation like um you can like a simple way to reduce the amount of computation that you put into this is by taking only a random subset of the training data like shoot out nine out of every ten of them at random this should give you like already enough to say that the line will be flat because then you know that the variation will be like smaller than 10 to the minus 10 where the full amount you get that it's 10 to the minus 15 or something um given that i think it's uh like given how absolutely flat this is perhaps the effect is strong enough that you will still see it perhaps even this strongly if you do not do just the first 32 tokens because like if like this is so lawful that presumably the law is simple and we should find the simplest architecture where the still holds for example whether this still holds for mlps rather than transformers even or something yeah that's related to a thought i have i think a lot of the mental model that you've talked to in the beginning probably comes from literature on catastrophic forgetting from way back before transformers so it might just be a different regime or something yeah that's that is fair the overwhelming majority of results on you know everything from catastrophic forgetting to to memorization is is not on transformers um there's like six papers five of which were written by nicholas carlini um that are actually about transformers with more than 500 million parameters um so this is in that sense pretty unexplored um in the particular context that we are that i mean that i i and i think most people in this in this call are deeply invested in sure would it be cheap in depth and compute to uh redo the that small experiment with uh the entire trading set um i would have to i would have to do the math um i i can say that we have been getting a lot of offers to by people who think that luther is doing cool stuff who want to give us compute and that a lot of these people don't really understand what kinds of of setups you need to train like a a very large language model but you need you know high quality interconnect and stuff like that but um while say commercial cards are not very good for training large language models um if we can get access to a large number of commercial cards uh you know there are definitely accessible ways to kind of massively parallelize this and this is like 100 parallelizable you don't need to do it in any particular you don't need to do the analysis in a particular order once you have the trained models you can literally run each inference on a separate card if you have that many cards um so i am i am very interested in kind of trying to figure out how to scale this analysis in different ways and i do think that there is a way that we can make that cost effective whether it's uh increasingly using subsampling the way i think it was gurken glass who said that or whether it's simply like you know finding people who are willing to let us borrow like 20 80 gpus or or other kinds of commercial cards that are a lot more accessible than a100s uh but which historically speaking ulluthera has not expressed a huge amount of interest in because we can't use them to train large language models what probability do you predict that when you that if you were to rerun this without the adjustment by looking only at the start of every file you would get the same flat line hi i would i would i would bet five to one odds in favor that that's like a literal offer if anyone wants to take me up on that like i i would straight up you know bet a hundred dollars on five to one odds on this [Music] i agree so i guess one thing i want to take a look at um i don't know if we have the resources for it uh if it makes sense or if it makes sense to take a look at but i want to see because this from what i'm gathering we're looking at the average memorization from like looking back throughout the trade the training run uh is that roughly correct yes that is exactly what we're doing um one thing i kind of want to see is i want to see the memorization of a single data point um throughout the training run so i want to see you have one sequence and then about like say uh you know at the start or midway through the training run or whatever um the model sees the data point and then we plot the log likelihood as a function of as a function of time essentially um throughout the training process uh probably would be cool to do it like multiple time steps right after uh it sees it and write a like right before and right after uh and then maybe like with increasing step size after that but i think that would be something really cool to see to see if like how much does an individual data point actually move things because from what what i'm looking at here it seems like the answer is the suggested answer is it doesn't but i somehow find that surprising so i think that's a really great question i think they're i'm i'm not 100 sure what you mean mostly because i think there's a couple different ways to interpret what you're saying one of them is we can say okay let's stratify this and find let's go find the 10 most memorized data points in the entire data set if we plot their locations are there locations evenly distributed so it's it's possible the average memorization is constant even if the one the location of the one percent most memorized tokens is not um and that's that's something that we have uh that we have not yet done and there's something that i do want to do another thing is you know some of these tokens show up multiple times in training other ones don't well actually almost all of them show up multiple times in training uh but some will show up twice in training and some of them show up ten times in training uh no someone showed up once in training and someone show up four times in training there we go uh i know how this data was processed i was definitely there and paying attention to what we were doing uh so given the evidence that uh the number of times a data point is seen influences how the extent to which it's memorized we can say okay what if we we take the subset of the training data that's only seen twice and plot that and then plot as a separate line only the data points have been seen three times and then plot as a separate line only that as points have been seen four times and what does what do those curves look like um and for for this specific question like you know if if you strike but if you are interested in position within training but you strike by number of times you see it you see the data point um i don't know the answer but if you are are solely interested in like what the extent to which seeing more data points influences the memorization there's actually a really nice scaling curve um for that there is a paper that is that i have seen a preprint of uh that is being developed that is currently under review at some conference whose name i forget but one of the ones particularly stringent and anonymity requirements and so they weren't able to put a preprint of it online but there is some some really interesting uh results about like sk scaling laws for memorization in the sense of uh how does the influence of um how does the extent to which a data point is memorized uh get influenced by number of exposures there we go i think that was a coherent sentence yeah so i think what i was trying to get at is if you take if you have like some unique and and you can like you said there's there's some things where it happens once some things where it happens multiple times but you know say you have just a unique string that shows up in the data set once um you know just for the sake of simplicity you can find maybe some that show up multiple times but say you just had a unique string that shows up once in the data set um what i'm interested in essentially is how does the language model react like as you snapshot the training at multiple different uh steps throughout it uh how do those snapshots react to uh seeing the the uh the the data point essentially um so i would expect to just surf memorized and then become less memorized yeah exactly right so like what i would expect to see is i would expect a sort of a v-shaped profile where initially the model is learning to use generalization to figure out the the thing that it's never seen before which is the data point and then when it does see it i would expect to see something like a small drop in its uh you know in the in the loss essentially corresponding to assigning more data to something it's seen before and then after that i would expect it to crawl back up as that gets progressively overwritten uh by more and more data that comes in um that is kind of my intuition for what i what i would expect to see so first of all i agree 100 with your intuition that is also my intuition for what we'd expect to see um there is a sense in which we are seeing part of that plot um with the curves that i that i can show you right now which is that we can see from the evaluation index of being although at the end of training uh the the various points along the curve for earlier uh data points and i think that this implies this would be a lot more easy it would it would be difficult to um make consistent what's the word for that it would be difficult to expl simultaneously explain this and a very strong v shape in my mind um like i don't think i agree or particularly with each other and so we came across this oh yeah so when when we yeah that's why i said i'm prioritizing yeah that's why that's why i agree with you completely that is that my expectation i would see something like that and we're looking at something that says that's not at all what happens uh so i think that that's kind of the main thing that that jumps out at me so i think uh it's when you think about a chameleon here when a chameleon is not surprised like the chameleon is both a democrat and a republican and when the republicans see something that surprises themselves you know just give that information to the democrat don't be surprised um so there might be a way that the chameleon nature of the lm explains this sorry could you unpack that there's not just one person there's a lot of it sounded more like a joke than yeah you see some tokens that surprise you there's not one person in the model you know somebody is not surprised by seeing that token and and if you are still surprised you create another person so uh who wouldn't be surprised so like the capacity of the element is a lot bigger than thinking that it's one entity that is either surprised or not surprised what you're trying to explain here is why the performance does not depend on the time it's been since the model has seen a piece of data so i think maybe to try to try to interpret uh the claim there it's that um language models don't have a single consistent what's called point of view um if you generate from a language model many times you will obtain extremely different continuations if you ask it like a like a factual question even um you'll or especially with if you as you vary prompting and kind of as you vary the way that you're asking the question um you you see very different types of answers um and sometimes you get the impression that the model knows the answer to a question sometimes you get the impression the model doesn't know the answer to the question and so there's there isn't it's not like there's a person inside the language model answering things it's a lot more like there's a crowd and that you are getting a random person from that crowd to answer the question each time you ask it and so if that's how you're thinking about the way that language model generations come about you might say okay so maybe some of these uh people living inside the language model that that are producing the outputs um have things memorized have a particular faction memorized and others don't and that as you kind of do this and sample randomly from the from the model only once and uh and you know for negative log likelihood there's a pretty large explicit average that we're computing over the probability of generation maybe the the averaging across these different um i think points of view is what david called it um cancel each other out and so on net there is no signal even if uh there is some kind of more complicated thing going on under the under the hood you wouldn't expect for a large mixture of models to have them all average out to exactly nothing like this you would expect if all of them are like nothing and then one of them is something else you would see that and even if they are different they wouldn't be exactly opposite to each other um so one thing i'm curious about is uh like this is sort of like the very dumb sort of memorization where you just show it a bit sequence and it predicts the next 32 tokens but uh is there like a more natural language sort of memorization where you can like ask it a factual question in natural language and depending on where the text that explains the concept even if it uses different tokens uh shows up in the training data does that change so yes but i think that doing that properly is kind of dependent on the probing results that you are first of all yes it is whoever interjected that is exceptionally difficult um being able to do it in a in an effective fashion is probably dependent upon very powerful probing results um like the ones that igor is trying to find um i don't think that we have the capacity to do that experiment in in a reasonable fashion today like i just don't think that it's known how to do that oh sorry go ahead but what i want to say like i feel like that that's also extremely uh it's gonna be you're gonna see a large amount of variance down to prompting they all could come down to prompting and so like how you prompt for the answer is get a dramatically change how it matches what it's seeing and so i i would not be surprised to he see that this is just so noisy that it's not a really relevant result or anything conclusive is what i should say yeah i was thinking maybe you could do something like putting uh you know putting fake facts at at known points into the training data so that you sort of know what you're looking for um and then and then trying to fish them out with prompting um i guess the reason i i'm sort of really strongly expecting this to work or like to show difference across the training data is sort of this picture of like learning how to learn that people have about gpg iii that would require for every additional fact your input to to retrain the entire model or perhaps like to retain it once once you've um input all of those facts but that brings me to like i would have expected that in order to check whether the training order like data order matters you shovel the training data and train the model on the new order as well and then see if they agree do you think they would agree uh that is a great question and that is an experiment i plan on doing um i i have personally been kind of busy for the past month um training a different large language model uh but like no i think that's a really reasonable question to ask and i think that we should train a model and see if that happens all right you could think it's a good idea we could then do like arbitrary uh inquiries into how they behave and then check whether they do the same thing such as asking them about fuzzy facts that we really don't know when they learned them yeah um i think i think that in terms of reasonably accessible ex follow-up experiments um the the two things that i am personally most interested in doing next are shuffling the entire training data and re-running everything returning the model re-running this this computation and seeing how much the results change um and then well hopefully the other one wasn't really interesting because i already forgot i guess the other one wasn't really interesting because i've already forgotten what it is there's definitely something else i wanted to mention as a particularly promising follow-up experiment but i have absolutely no idea what i was thinking uh also i i guess like this is kind of going back to uh igoro's work um but one thing i'd kind of be interested in looking at is the logic lenses that are being looked at um has anyone kind of looked at i guess their dimensionality like how sensitive are they to how sensitive is the predicted token to movements in the parameter space in some directions versus others and what's the ratio of uh of those directions that matter to the out like predicted token versus the uh directions that don't matter i guess so it sounds like you want to do pca or or what is that something you want to apply it to i guess it would be the the residuals uh between the different layers um so wherever we're inserting the logit lens essentially um one thing to look at would be uh you know if you know just for a simplification assume that it's a assume that we're just the lens is actually just a linear layer what is the null space of that or like the approximate null space of that linear layer and what is it what is its you know the opposite of that the space uh that is directly um you know that directly affects the output of that linear layer and how big are those two with respect to one another or is there even is there even that space like what are the what is the distribution of eigenvalues so to speak of the uh of the the matrix or maybe some approximation to it um that is being learned by the logic by the logic lens yeah so just a couple of quick thoughts on that i i did look at the tuned lenses can we interpret what the lens wood lens is doing so if we tune a lens on the output of layer six can we in some way you know can we look at it i try to like visualize the the matrix the matrix i try to look at the eigenvalues i looked at some in various ways but i did i it didn't it wasn't trivially interpretable like basically you saw some factors that had the most impact and then there was sort of gradual long tail of uh you know of other factors and so so uh it it it seemed like the the space is complicated and you have to have the way i ended up interpreting it is there's uh there are various signals that you have to sort of subtract in order to get the prediction out and so i wasn't able to get something that's that's trivially interpretable so it wasn't like oh there's you know this there's one eigenvalue that corresponds to the prediction of the next token or anything like that did you share the matrix across the layers uh this is where this is in the model where you train a matrix for the output of each layer so there's a yes okay so you don't share them have you considered um trying to share it and then seeing how much the performance varies in that case to see whether the dimensions even if you cannot interpret the mean the same thing across layers uh if the performance doesn't really i did show those cards right so so i did show the chart where you train online there and how does it perform on all the other layers that's oh sorry then i misunderstood that sorry yeah yeah but that's exactly what the intent of of that or how lens from one layer performs on the other layer but at least able to kind of really trivially interpret what's in those transformations okay i think this is a good time to at least call the official end of the presentations thanks so much guys for uh putting the stuff out there this has been a lot of really good material and i hope it's given people a lot to think about i'm gonna stop the recording here um but yeah hopefully see you next time and feel free to hang out for as long as you like
22f8e81a-06b8-4e46-90dd-579aea868b65
trentmkelly/LessWrong-43k
LessWrong
Re: sub-reddits A while back, I polled the community on the possibility of subreddits. Most people said they wanted them, and I said I'd investigate. I talked to a couple of people and eventually ended up talking to Tricycle, the developers of this site. They told me about their own proposed solution to the community organization problem, which is this new Discussion section. They said that searching the discussion section by tag was equivalent to a sub-reddit. For example, if you want a sub-reddit on consciousness, the discussion consciousness tag search is an amazing imitation I told them I wasn't entirely convinced by this and sent some reasons why, but I haven't heard back from them lately and I'm not going keep pursuing this and make a big deal of it unless a large percentage of the people who wanted sub-reddits are unsatisfied.
0d174285-1ddd-483d-9293-f0faf6d31960
trentmkelly/LessWrong-43k
LessWrong
Shed Wall Plans Last week I wrote about estimating whether insulation was worth it for our shed if we're going to use it as a home office. While I'm waiting for the carpenter to put a new roof on it, I'm thinking through what finishing the walls would look like. The Facebook side of the discussion on the insulation post was very helpful, and got me thinking about putting closed cell foam panels directly against the concrete walls. Because it's both a vapor barrier and efficient insulation it reduces the risk of condensation happening between the concrete and the insulation. I'm less sure about what comes next. Whatever I'm doing is not structural, and there are a lot of trade-offs. All of the options involve the layer of closed-cell foam directly against the concrete, but then they vary on how they're finished. Standard options: * Build a 2x3 wall, then drywall. This gives room to run the electrical, and wood to attach the drywall to. The wall would be attached to the subfloor below and roof joists above. Nothing needs to go through the foam to the concrete, and the wood framing makes later work easier. * Build a 2x4 wall. Same advantages as a 2 x 3 wall, but now there's enough space in the cavity that it could be insulated. I could either use this for a larger amount of insulation total, or reduce the thickness of the foam.(video example) * Furring strips. Run strips of wood horizontally along the wall, and attach them through the foam into the concrete. Then run a second set vertically, 16 inches on center, to attach the drywall to. There's room for electrical as long as you dig out a bit of the foam for the boxes. (video example). Less conventional: * Verticals inside: Instead of using furring strips, put vertical 2x3s from floor to ceiling, 16 inches on center. Attach them a the top and bottom (roof joists and subfloor). Attach the drywall to the wood. Run the electrical through the ceiling, and then down into the appropriate bay so you don't need to drill through any